Publication


Featured research published by Ruth S. Day.


Attention Perception & Psychophysics | 1975

Failure of selective attention to phonetic segments in consonant-vowel syllables

Charles C. Wood; Ruth S. Day

Subjects performed a two-choice speeded classification task that required selective attention to either the consonant or the vowel in synthetic consonant-vowel (CV) syllables. When required to attend selectively to the consonant, subjects could not ignore irrelevant variation in the vowel. Similarly, when required to attend selectively to the vowel, they could not ignore irrelevant variation in the consonant. These results suggest that information about an initial stop consonant and the following vowel is processed as an integral unit.


Journal of the Acoustical Society of America | 1972

Separate Speech and Nonspeech Processing in Dichotic Listening

Ruth S. Day; James C. Bartlett

Temporal order judgment (TOJ) in dichotic listening can be a difficult task. Previous experiments that used two speech stimuli on each trial (S/S) obtained sizable error rates when subjects were required to report which ear led (TOJ‐by‐ear). When subjects were required to identify the leading stimulus (TOJ‐by‐stimulus), the error rate increased substantially. Apparently, the two speech stimuli were competing for analysis by the same processor, and so were overloading it. The present experiment used the same TOJ tasks, but presented a speech and a nonspeech stimulus on each trial (S/NS). The error rate was comparable to that of S/S for TOJ‐by‐ear, but did not increase for TOJ‐by‐stimulus. This would be expected if the speech and nonspeech stimuli are being sent to different processors, each of which performs its analysis without interference from the other. The interpretation of the data given here is consistent with the results of standard identification experiments reported elsewhere: when asked to identify both stimuli on each dichotic trial, subjects made many errors on S/S, while performance was virtually error‐free on S/NS.


Journal of Experimental Psychology: Human Perception and Performance | 1976

Processing Two Dimensions of Nonspeech Stimuli: The Auditory-Phonetic Distinction Reconsidered

Mark J. Blechner; Ruth S. Day; James E. Cutting

Nonspeech stimuli were varied along two dimensions--intensity and rise time. In a series of speeded classification tasks, subjects were asked to identify the stimuli in terms of one of these dimensions. Identification time for the dimension of rise time increased when there was irrelevant variation in intensity; however, identification of intensity was unaffected by irrelevant variation in rise time. When the two dimensions varied redundantly, identification time decreased. This pattern of results is virtually identical to that obtained previously for stimuli that vary along a linguistic and a nonlinguistic dimension. The present data, taken together with those from other studies using the same stimuli, suggest that the mechanisms underlying the auditory-phonetic distinction should be reconsidered. The results are also discussed in terms of general models of multidimensional information processing.


Journal of the Acoustical Society of America | 1972

Mutual Interference between Two Linguistic Dimensions of the Same Stimuli

Ruth S. Day; Charles C. Wood

In a previous study subjects identified binaural stimuli that varied along both a linguistic and a nonlinguistic dimension. The linguistic dimension consisted of variation in stop consonants while the nonlinguistic dimension consisted of variation in fundamental frequency. There were four stimuli: /ba/—low, /ba/—high, /da/—low, /da/—high. Reaction times were obtained in a two‐choice identification task when the target dimension was the only one that varied. When there was also irrelevant variation in the nontarget dimension, reaction times increased substantially for the linguistic dimension, but only slightly for the nonlinguistic dimension. Thus the nonlinguistic dimension interfered with the processing of the linguistic dimension more than vice versa. The present study employed the same paradigm, but used two linguistic dimensions: stop consonants and vowels. The stimuli were /ba, bae, da, dae/. Reaction times increased substantially for both dimensions when there was also irrelevant variation in the non...


Journal of the Acoustical Society of America | 1973

Digit Span Memory in Language‐Bound and Stimulus‐Bound Subjects

Ruth S. Day

Dichotic tests involving phonological fusion yield bimodal individual differences. Given items of the general form BANKET/LANKET, some subjects report hearing BLANKET. They report such fusions no matter which item began first. When specifically asked to report the leading phoneme, they report /b/ even when /l/ led by a substantial interval. Since they are reporting the phonological order permitted in English rather than the actual physical events, they have been termed “language bound.” Other subjects, termed “stimulus bound,” do not fuse and are better able to determine which phoneme led. Language‐bound and stimulus‐bound subjects, as defined by the dichotic fusion tests, took some digit‐span memory tests. Nine digits were presented auditorily on each trial and subjects had to recall each item in the appropriate serial position. Stimulus‐bound subjects displayed significantly superior memory. In digit‐span experiments, performance is typically best at the beginning and end of the list, with a sizeable dr...


Journal of the Acoustical Society of America | 1970

Temporal Order Perception of a Reversible Phoneme Cluster

Ruth S. Day

The synthetic syllable /taes/ was presented to one ear, while at the same time /taek/ was presented to the other ear. On some trials, both syllables began at the same time, while on others /taes/ led by 5, 10, 15,⋯, 100 msec, or /taek/ led by these same intervals. When asked to report “what they heard,” subjects often reported /taesk/ or /taeks/. Later, when asked to report “the last sound they heard,” subjects performed well on both /s/ and /k/. These results contrast with those of a previous study involving nonreversible clusters: when asked to report the first phoneme of the dichotic pair /baeek/‐/laeek/, subjects reported hearing /b/ first, independent of the lead conditions presented. A tentative model for temporal factors in speech perception is proposed. It incorporates considerations of the effects of both parallel and serial transmission of phonemic information. [This research was supported in part by a grant from NICHD to the Haskins Laboratories.]


Journal of the Acoustical Society of America | 1973

A Parallel between Degree of Encodedness and the Ear Advantage: Evidence from a Temporal‐Order Judgment Task

Ruth S. Day; James M. Vigorito

Some speech sounds are more highly encoded than others. In acoustic terms, this means that they undergo more restructuring as a function of neighboring phonemes. In psychological terms, it may mean that special processing is required to perceive them. Stop consonants appear to be the most highly encoded speech sounds, vowels the least encoded, with other sounds falling in the middle. Stops, liquids, and vowels served as target phonemes in tests of dichotic temporal‐order judgment (TOJ). A different syllable was presented to each ear with one leading by 50 msec, e.g., BAE(50)/GAE. Subjects reported which syllable began first. Ear difference scores were obtained by taking percent‐correct TOJ on trials where a given ear received the leading stimulus and subtracting percent‐correct TOJ on trials where the other ear led. Stop consonant pairs yielded a right‐ear score, liquids a reduced right‐ear score, and vowels a left‐ear score. A right‐ear advantage in dichotic listening is usually interpreted as reflecting...
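The ear‐difference score described above (percent‐correct TOJ when a given ear receives the leading stimulus, minus percent‐correct when the other ear leads) can be sketched as a small computation. The numeric values below are hypothetical illustrations, not data from the study.

```python
def ear_difference(percent_correct_right_leads, percent_correct_left_leads):
    """Ear-difference score for the right ear: percent-correct TOJ when the
    right ear received the leading stimulus minus percent-correct when the
    left ear led. Positive = right-ear advantage; negative = left-ear advantage."""
    return percent_correct_right_leads - percent_correct_left_leads

# Hypothetical percent-correct values, chosen only to mirror the reported
# pattern: stops > liquids for right-ear scores, vowels negative.
stops = ear_difference(78.0, 62.0)    # right-ear advantage
liquids = ear_difference(70.0, 64.0)  # reduced right-ear advantage
vowels = ear_difference(61.0, 68.0)   # left-ear advantage

print(stops, liquids, vowels)  # 16.0 6.0 -7.0
```

The sign convention is the only substantive choice here: a positive score reflects a right‐ear advantage, matching the usual interpretation in dichotic listening.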


Journal of the Acoustical Society of America | 1971

Perceptual Competition between Speech and Nonspeech

Ruth S. Day; James E. Cutting

In contrast with previous dichotic listening experiments that delivered either two speech messages (speech/speech) or two nonspeech messages (nonspeech/nonspeech) on each trial, the present study used mixed trials: speech to one ear and nonspeech to the other ear (speech/nonspeech). The relative onset time of each dichotic pair was varied over a ±200‐msec range. Subjects were asked to report which stimulus began first on every trial. Processing Time: If, according to a speech mode hypothesis, there is a set of processors specialized for speech, then speech stimuli might well require more processing time than nonspeech stimuli. The data support this view: subjects accurately reported hearing a nonspeech stimulus first when it led by a small time interval, whereas they were unable to determine that a speech stimulus began first unless it led by a much greater interval. Ear Advantage: Other studies have found a right‐ear superiority for speech/speech presentations and a left‐ear superiority for nonspeech/no...


Journal of the Acoustical Society of America | 1972

Dichotic Fusion along an Acoustic Continuum

James E. Cutting; Ruth S. Day

When stimuli such as banket and lanket are presented dichotically, phonemic fusions often occur: subjects report hearing blanket. Previous studies have shown that stop +/r/ and stop +/l/ items have different fusion properties. For example, /l/ was sometimes substituted for /r/ (but rarely vice versa): gocery/rocery yielded glocery. The present experiment varied the liquid stimuli along an acoustic continuum involving the third formant transition. For example, one set varied from ray to lay. Each was paired dichotically with an initial stop stimulus, in this case pay. All inputs (pay, ray, lay) and possible fusions (pray, play) were acceptable English words. When asked to report “what they heard,” subjects gave many fusion responses. Of these, there was a preponderance of stop +/l/ fusions (88% vs 12%). They occurred even for pairs where the liquid item was reported as an /r/ during separate binaural identification trials. Thus, given that an item was identified as ray, the same subjects reported heari...


Journal of the Acoustical Society of America | 1974

Differences between Language‐Bound and Stimulus‐Bound Subjects in Solving Word Search Puzzles

Ruth S. Day

Studies of dichotic fusion suggest that “language‐bound” (LB) subjects perceive speech sounds through the abstract linguistic structure of their language, while “stimulus‐bound” (SB) subjects can set aside linguistic rules and make accurate judgments about nonlinguistic events. In the present experiment, subjects of both types were asked to scan a matrix of letters in all directions in order to find words that exemplify a particular theme, e.g., musical instruments. SBs consistently found more words. Perhaps SBs simply have better spatial abilities, since the task requires scanning in eight directions. An alternative view is that the groups have comparable spatial abilities, but that LBs are preoccupied with linguistic operations: given a string of letters, they translate it into “phonetic sense” no matter what direction they happen to scan. For example, the highly pronounceable string TENIPS may obscure the fact that SPINET is spelled out in the reverse direction. Hence the two groups may differ in the r...

Collaboration


Dive into Ruth S. Day's collaborations.

Top Co-Authors

Curry I. Guinn

University of North Carolina at Wilmington


Diana L. Blithe

National Institutes of Health


Erin Berry-Bibee

Centers for Disease Control and Prevention
