Publications


Featured research published by Helena M. Saldaña.


Journal of Experimental Psychology: Human Perception and Performance | 1996

An audiovisual test of kinematic primitives for visual speech perception.

Lawrence D. Rosenblum; Helena M. Saldaña

Isolated kinematic properties of visible speech can provide information for lip reading. Kinematic facial information is isolated by darkening an actor's face and attaching dots to various articulators so that only moving dots can be seen, with no facial features present. To test the salience of these images, the authors conducted experiments to determine whether the images could visually influence the perception of discrepant auditory syllables. Results showed that these images can influence auditory speech independently of the participants' knowledge of the stimuli. In other experiments, single frozen frames of visible syllables were presented with discrepant auditory syllables to test the salience of static facial features. Although the influence of the kinematic stimuli was perceptual, any influence of the static featural stimuli was likely based on participants' misunderstanding or postperceptual response bias.


Attention, Perception, & Psychophysics | 1993

Visual influences on auditory pluck and bow judgments

Helena M. Saldaña; Lawrence D. Rosenblum

In the McGurk effect, visual information specifying a speaker's articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli: the discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects' auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words pluck and bow were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information's having an ostensive lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.


Perception | 1993

Auditory Looming Perception: Influences on Anticipatory Judgments

Lawrence D. Rosenblum; A. Paige Wuestefeld; Helena M. Saldaña

Several studies in the auditory-perception literature hint that listeners may be able to anticipate the time of arrival of an approaching sound source. Two experiments are reported in which listeners judged the time of arrival of an approaching car on the basis of various portions of its auditory signal. Subjects pressed a computer key to indicate when the car would have just passed them, assuming that the car maintained a constant approach velocity. A number of variables were tested including (a) the time between the offset of the signal and the virtual time of passage, (b) duration of the signal, and (c) feedback concerning judgment accuracy. Results indicate that increasing the time between signal offset and virtual time of passage decreases judgment accuracy whereas the actual duration of the signal had no significant effect. Feedback significantly improved performance overall.


Journal of the Acoustical Society of America | 1994

Selective adaptation in speech perception using a compelling audiovisual adaptor

Helena M. Saldaña; Lawrence D. Rosenblum

A replication of the audiovisual test of speech selective adaptation performed by Roberts and Summerfield [Percept. Psychophys. 30, 309-314 (1981)] was conducted. The audiovisual methodology allows for the dissociation of acoustic and phonetic components of an adapting stimulus. Roberts and Summerfield's results have been interpreted to support an auditory basis for selective adaptation. However, their subjects did not consistently report hearing the adaptor as a visually influenced syllable, making this interpretation questionable. In the present experiment, a more compelling audiovisual adaptor was implemented, resulting in a visually influenced percept 99% of the time. Still, systematic adaptation occurred only for the auditory component.


Attention, Perception, & Psychophysics | 1992

Discrimination tests of visually influenced syllables

Lawrence D. Rosenblum; Helena M. Saldaña

In the McGurk effect, perception of audiovisually discrepant syllables can depend on auditory, visual, or a combination of audiovisual information. Under some conditions, visual information can override auditory information to the extent that identification judgments of a visually influenced syllable can be as consistent as those for an analogous audiovisually compatible syllable. This might indicate that visually influenced and analogous audiovisually compatible syllables are phonetically equivalent. Experiments were designed to test this issue using a compelling visually influenced syllable in an AXB matching paradigm. Subjects were asked to match an audio syllable /va/ either to an audiovisually consistent syllable (audio /va/-video /fa/) or an audiovisually discrepant syllable (audio /ba/-video /fa/). It was hypothesized that if the two audiovisual syllables were phonetically equivalent, then subjects should choose them equally often in the matching task. Results show, however, that subjects are more likely to match the audio /va/ to the audiovisually consistent /va/, suggesting differences in phonetic convincingness. Additional experiments further suggest that this preference is not based on a phonetically extraneous dimension or on noticeable relative audiovisual discrepancies.


Journal of Experimental Psychology: Human Perception and Performance | 1993

Dynamical constraints on pictorial action lines.

Lawrence D. Rosenblum; Helena M. Saldaña; Claudia Carello

Pictorial action lines are an effective way of portraying movement in a static drawing. When such lines emanate from the backs of characters, they can give a sense of the path or style of movement. Seven experiments assessed whether photographic streak lines (lines that depict actual movement paths) can, in and of themselves, be informative about the act that produced them. Lines were produced as a darkly clad actor with point-lights attached to his major joints performed a number of actions in front of an open-lens camera. Completely naive subjects had little success identifying events in these photographs. Once subjects were told of the photographic technique, however, striking proficiency was achieved. Subtle distinctions (e.g., whether the movement was forward or backward, or performed while wearing weights) were made for some of the events. Results are discussed in terms of various treatments of pictorial information.


Journal of the Acoustical Society of America | 1994

The contribution of a reduced visual image to speech perception in noise

Jennifer A. Johnson; Lawrence D. Rosenblum; Helena M. Saldaña

It has long been known that seeing a talker’s face can improve the perception of speech in noise [A. MacLeod and Q. Summerfield, Br. J. Audiol. 21, 131–141]. Yet little is known about which characteristics of the face are useful for embellishing the degraded signal. Recently, a point‐light technique has been adopted to help isolate the salient aspects of a visible articulating face [Saldana et al., J. Acoust. Soc. Am. 92, 2340(A) (1992)]. In this technique, a speaker’s face is darkened and reflective dots are arranged on the lips, teeth, tongue, cheeks, and jaw. The actor is videotaped speaking in the dark so that when shown to subjects, only the moving dots are seen. In order to determine whether these reduced images could contribute to the perception of degraded speech, noise‐embedded sentences were dubbed with point‐light images at various signal‐to‐noise ratios. It was found that these images could improve comprehension depending on the number and location of points used. Implications of these results...


Journal of the Acoustical Society of America | 1994

Voice information in auditory form‐based priming

Helena M. Saldaña; Lawrence D. Rosenblum

In auditory form‐based priming, subjects are presented with speech stimuli that are phonetically related. Research has shown that a prime can facilitate recognition of a target item if prime and target share an initial phonetic segment. This facilitory effect has been considered either a result of residual activation of the initial phonetic representation [L. M. Slowiaczek and M. B. Hamburger, JEP: LMC 18, 1239–1250 (1992)] or post‐perceptual guessing strategy [Goldinger et al., JEP: LMC 18, 1211–1238 (1992)]. It could be, however, that the facilitation is at least in part based on nonlinguistic auditory information. Previous studies have always used the same speaker for prime and target items. In the present investigation, auditory form‐based priming experiments were conducted where voice information was changed from prime to target. Initial results reveal that varying voice information eliminates the facilitation effect. This could suggest that the effect is based on residual auditory activation. Follow...


Journal of the Acoustical Society of America | 1992

Visual influence on heard speech syllables with a reduced visual image

Helena M. Saldaña; Lawrence D. Rosenblum; Theresa Osinga

Visual information of a speaker's articulations can influence heard speech syllables [H. McGurk and J. MacDonald, Nature 264, 746–748 (1976)]. The strength of this so‐called McGurk effect was tested using a highly reduced visual image. A point‐light technique was adopted whereby an actor's face was darkened and reflective dots were arranged on various parts of the actor's lips, teeth, tongue, and jaw. The actor was videotaped producing syllables in the dark. These reduced visual stimuli were dubbed onto discrepant auditory syllables in order to test their visual influence. Although subjects could not identify a frozen frame of these stimuli as a face, dynamic presentations resulted in a significant visual influence on syllable identifications. These results suggest that ''pictorial'' facial features are not necessary for audiovisual integration in speech perception. The results will be discussed in terms of the ecological approach, the fuzzy logical model, and the motor theory of speech perception.


Journal of the Acoustical Society of America | 1995

The effects of talker‐specific information on immediate memory span

Helena M. Saldaña

Recent evidence suggests that talker‐specific information is retained along with codes for words and phonemes in long‐term memory [Palmeri et al., JEP:LMC 19, 309–328 (1993); J. W. Mullennix and D. B. Pisoni, 365–378 (1990)]. If this indexical information is retained, one would expect talker‐specific information to also affect a listener's performance on short‐term memory tasks. However, the predominant trace decay theories of short‐term memory do not predict this. For example, in the articulatory loop model, items are stored as memory traces that fade after approximately 2 s unless revived by an articulatory control process [A. D. Baddeley and G. J. Hitch, Psych. Learn. Motiv. (1974)]. The number of items that can be reactivated within the decay time can be retained indefinitely. Therefore, immediate memory span is defined in terms of time or duration of items. In the present investigation, memory span experiments were conducted in which voice information was held constant or was changed for each item in ...

Collaboration


Dive into Helena M. Saldaña's collaboration.

Top Co-Authors


Claudia Carello

University of Connecticut
