James V. Ralston
Indiana University Bloomington
Publications
Featured research published by James V. Ralston.
Human Factors | 1991
James V. Ralston; David B. Pisoni; Scott E. Lively; Beth G. Greene; John W. Mullennix
Previous comprehension studies using postperceptual memory tests have often reported negligible differences in performance between natural speech and several kinds of synthetic speech produced by rule, despite large differences in segmental intelligibility. The present experiments investigated the comprehension of natural and synthetic speech using two different on-line tasks: word monitoring and sentence-by-sentence listening. On-line task performance was slower and less accurate for passages of synthetic speech than for passages of natural speech. Recognition memory performance in both experiments was less accurate following passages of synthetic speech than of natural speech. Monitoring performance, sentence listening times, and recognition memory accuracy all showed moderate correlations with intelligibility scores obtained using the Modified Rhyme Test. The results suggest that poorer comprehension of passages of synthetic speech is attributable in part to the greater encoding demands of synthetic speech. In contrast to earlier studies, the present results demonstrate that on-line tasks can be used to measure differences in comprehension performance between natural and synthetic speech.
Journal of the Acoustical Society of America | 1994
James V. Ralston; Mary Tse; Emily R. Campbell; Alison D. Wright; Tracey L. Fisher; Michael McCall
Ten males in each of three age groups (M=12 years, M=22 years, M=74 years, respectively) were recorded while reading lists of 16 words and 16 sentences. The words and sentences were digitized, segmented, and saved on disk. In one experiment, a sample of the words was presented in random order to listeners who judged the age of the speakers. Listeners overestimated the ages of the young voices by 1 year, overestimated the ages of the college‐age voices by about 9 years, and underestimated the ages of the elder voices by almost 20 years. An assimilation effect was also observed, such that age estimates of voices were shifted toward the ages of speakers on the previous trial. Finally, there was a significant correlation between age estimates and the duration of the words. A second experiment utilizing the same methods with isolated sentences revealed smaller estimation errors and smaller assimilation effects. These results, as well as those from ongoing experiments, will be discussed in light of their basic an...
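The duration–age-estimate correlation reported in this abstract can be illustrated with a minimal Pearson-correlation sketch. The per-word durations and age estimates below are invented for illustration and are not data from the study:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-word data: word duration (s) and mean estimated speaker age (years).
durations = np.array([0.35, 0.42, 0.50, 0.58, 0.63, 0.71])
age_estimates = np.array([18.0, 24.0, 31.0, 38.0, 41.0, 55.0])

# A positive r would mean longer words attract older age estimates.
r, p = pearsonr(durations, age_estimates)
print(r > 0.9)  # True for this invented, nearly linear data
```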
Journal of the Acoustical Society of America | 1990
James V. Ralston; Keith Johnson
There has been continuing debate regarding claims that human listeners process tonal stimuli differently as a function of perceptual set. To address this issue, the identification of sinusoidal stimuli in subjects reporting either speech or nonspeech percepts was examined. One set of stimuli was constructed from sinusoids modeled after the formants present in a [wi‐ju] continuum. Subjects reliably classified stimuli with two response categories, regardless of their perceptual set. Probit analysis revealed a steeper slope for speech labeling functions. Item‐by‐item analysis revealed larger effects of stimulus context for nonspeech listeners. Finally, increases in response latencies were larger for speech listeners for stimuli near the labeling category boundary. The major trends observed with the first set of stimuli were also obtained, at smaller magnitudes, with a second, simpler set of stimuli ([ae‐uh]). Taken together, these results suggest that speech listeners process sinusoidal stimuli in a qualitatively different manner than nonspeech listeners. Speech listeners appear to weight stable internal criteria more heavily than nonspeech listeners, who weight stimulus context more heavily.
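The probit analysis named in this abstract can be sketched briefly: fit a cumulative-normal psychometric function to identification proportions along a stimulus continuum, then compare slopes. The continuum steps, parameter values, and response proportions below are invented for illustration, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit(x, mu, sigma):
    """Cumulative-normal psychometric function: P(category-A response)."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical 7-step continuum; a smaller sigma means a steeper labeling
# function, as reported for the speech-set listeners.
steps = np.arange(1, 8, dtype=float)
p_speech = probit(steps, mu=4.0, sigma=0.8)      # steep function
p_nonspeech = probit(steps, mu=4.0, sigma=1.6)   # shallower function

(mu_s, sig_s), _ = curve_fit(probit, steps, p_speech, p0=[4.0, 1.0])
(mu_n, sig_n), _ = curve_fit(probit, steps, p_nonspeech, p0=[4.0, 1.0])

print(sig_s < sig_n)  # the "speech" fit recovers the steeper slope
```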
Journal of the Acoustical Society of America | 1988
Sarah Partan; Kristin K. Jerger; Mark J. Xitco; Herbert L. Roitblat; Louis M. Herman; Humphrey Williams; Michael Hoffhines; John D. Gory; James V. Ralston
The vocalizations of two Atlantic bottlenose dolphins (Tursiops truncatus) were recorded during a study of behavioral mimicry. Underwater recordings were made on one channel of a stereo magnetic tape, while a behavioral trial‐by‐trial narrative was recorded onto a second channel. The tapes were analyzed by human listeners blind to the behavior performed. The vocalizations were divided into categories of pulsed versus nonpulsed (whistle) sounds. There was a significant dependency between vocalizations produced and behaviors performed (χ2 = 320.48, df = 60, p < 0.00001). This finding suggests that the dolphins may communicate behavior specific information in their vocalizations. The results of current spectrographic and further behavioral studies will also be presented, with the objective of determining whether the vocalizations are communicative or are artifacts of the behavior.
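The chi-square test of independence reported above (χ² = 320.48, df = 60) can be illustrated in miniature. The toy vocalization-by-behavior counts below are invented; only the test procedure mirrors the study, whose actual contingency table was much larger (df = 60):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = vocalization category (pulsed, whistle),
# columns = two behaviors performed.
counts = np.array([[30, 10],
                   [10, 30]])

# correction=False gives the plain Pearson chi-square statistic.
chi2, p, df, expected = chi2_contingency(counts, correction=False)
print(round(chi2, 2), df)  # 20.0 1
```

A significant result, as in the study, indicates that vocalization category and behavior are not independent.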
Journal of the Acoustical Society of America | 1996
James V. Ralston; Jarad M. Plesser; Elizabeth A. Lawless
The present research investigates the effect of semantic content of an utterance on age perception. On each trial of two experiments, listeners heard either a relatively old or young voice uttering a word or nonword. Listeners rapidly decided whether the speaker was relatively young or old. One experiment included five repetitions of the words ‘‘young’’ and ‘‘old,’’ and a nonword spoken by a college‐aged and a middle‐aged female. Analyses of reaction times for correct responses revealed a Stroop effect: Responses were quicker when the semantic information and the age of the speaker were consistent than when they were inconsistent. Overall reaction times, but not the Stroop effect, decreased across repetitions. A second experiment utilized the words ‘‘young’’ and ‘‘old,’’ age‐associates, age‐neutral words, and nonwords, all recorded from a college‐aged and a middle‐aged female. Analyses of reaction times revealed a Stroop effect for the semantic associates. In addition, there was a strong trend for respons...
Journal of the Acoustical Society of America | 1996
James V. Ralston; George Fidas; Christin E. Agli; Colleen Coppola
Previous research has shown that relatively localized acoustic properties, such as voice pitch, f0 perturbation, and hoarseness, are cues to age perception. The present studies examined the potential role of sentence‐level prosodic information in the perception of age. Sixteen different six‐word sentences were recorded from eight different males in each of three age groups (middle school students, college students, and senior citizens). Several acoustic measurements were made of each sentence, including mean frequency, slope of intonation contour across the entire sentence, slope of terminal fall, and amount of inflection (i.e., absolute frequency change summed across the sentence). A second set of sentences was constructed by randomly reordering the words in each sentence. Original sentences and their scrambled counterparts were presented to 19 listeners who rated the age in years of the speaker of each sentence. Analyses of variance revealed that age judgments were less extreme for scrambled sentences as co...
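The sentence-level measurements named in this abstract (mean frequency, overall contour slope, summed absolute inflection) are straightforward to compute from an f0 track. The sketch below uses an invented f0 array and an assumed 10-ms frame step; none of the values come from the study:

```python
import numpy as np

# Hypothetical f0 track (Hz), one value per 10-ms frame, for one sentence.
f0 = np.array([120., 130., 125., 118., 110., 105., 95.])
frame_s = 0.010                                   # assumed frame step
t = np.arange(len(f0)) * frame_s

mean_f0 = f0.mean()                               # mean frequency
slope_hz_per_s = np.polyfit(t, f0, 1)[0]          # overall intonation slope
inflection = np.abs(np.diff(f0)).sum()            # summed absolute f0 change

print(round(mean_f0, 1), round(inflection, 1))  # 114.7 45.0
```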
Journal of the Acoustical Society of America | 1992
James V. Ralston; Kathryn F. Gage; Jeffrey G. Harris; Sean P. Brooks; Louis M. Herman
It was previously reported [Ralston et al., J. Acoust. Soc. Am. Suppl. 1 84, S77 (1988)] that a dolphin appeared to perceive relative pitch in sequences of four pure tones. At any point in those studies, all stimuli were pitch transpositions of one of two contours. The dolphin generally whistled after presentation of one contour and remained silent after presentation of the other contour. Recent analyses have examined whistle probability, whistle latency, and whistle duration as a function of the absolute pitch of stimuli. Generally, as the absolute pitch of the stimuli increased, the probability of whistling increased and the whistle latency decreased. The combined results suggest that the dolphin judged both the absolute and relative pitch of the stimuli. Consistent with Ridgway et al. [J. Acoust. Soc. Am. 89, 1967 (A) (1991)], the present results also suggest that whistle responses provide a sensitive index of perceptual processing and are an appropriate modality in choice response paradigms with dolphins.
Journal of the Acoustical Society of America | 1992
James V. Ralston; John Schwartz; L. George Van Son
An interactive HyperCard stack is being developed for pedagogy in several departments at a liberal arts college. The stack is intended for use as a supplement to introductory level courses in physics, psychology, speech pathology, audiology, and linguistics. Therefore, the stack is designed for students with no background in acoustic‐phonetics, mathematics, or physics. At present, the stack is primarily concerned with vowel sounds. The three main content domains of the stack are speech production, auditory/speech perception, and the physics of sound. In addition, there is a hypertext‐oriented glossary and a corpus of speech sounds with associated articulatory and acoustic information.
Journal of the Acoustical Society of America | 1989
James V. Ralston; John W. Mullennix; Beth G. Greene; Scott E. Lively
Accumulated perceptual research comparing natural and synthetic speech indicates relatively large differences in tasks assessing acoustic‐phonetic processing, and small differences in tasks assessing higher levels of processing related to comprehension. Studies comparing comprehension of passages of fluent natural and synthetic speech have generally examined performance on questions presented after subjects have listened to a passage. Such postperceptual measures are known to be relatively insensitive to differences in “real‐time” processing operations. The present investigation employed an “on‐line” measure of processing, i.e., word monitoring, to study comprehension. Subjects in these studies were presented with three types of passages—(1) natural speech, (2) high‐quality synthetic speech, and (3) low‐quality synthetic speech—and were required to monitor for target words as well as verify postperceptual comprehension questions. Monitoring latencies and verification performance will be discussed in terms...
Journal of the Acoustical Society of America | 1988
James V. Ralston; Louis M. Herman; Humphrey Williams; John D. Gory; Kristin K. Jerger
Previous studies with monkeys and songbirds indicate a limited capability to recognize interval or contour information in melodic sequences of discrete sinusoidal tones. In the present experiments, two different types of melodic sequences, either four‐tone sequences with constant pitch or four‐tone sequences with descending pitch, were projected underwater to a 12‐year‐old adult female Atlantic bottlenose dolphin. The initial training attempted to teach the dolphin to press specific paddles after presentation of stimuli. Instead, the dolphin began to whistle spontaneously after descending‐pitch sequences and remained silent after constant‐pitch sequences. Over the same trials, she responded at chance with paddle presses. The paddles were removed, and vocal responses were thereafter judged in real‐time by “blind” listeners. In subsequent tests, the dolphin successfully transferred the vocal responses to several pitch‐transposed stimuli drawn from within, as well as a full octave above, the pitch range of t...