Publications


Featured research published by Arthur G. Samuel.


Psychonomic Bulletin & Review | 2006

Generalization in perceptual learning for speech

Tanya Kraljic; Arthur G. Samuel

Lexical context strongly influences listeners’ identification of ambiguous sounds. For example, a sound midway between /f/ and /s/ is reported as /f/ in “sheri_” but as /s/ in “Pari_.” Norris, McQueen, and Cutler (2003) have demonstrated that after hearing such lexically determined phonemes, listeners expand their phonemic categories to include more ambiguous tokens than before. We tested whether listeners adjust their phonemic categories for a specific speaker: Do listeners learn a particular speaker’s “accent”? Similarly, we examined whether perceptual learning is specific to the particular ambiguous phonemes that listeners hear, or whether the adjustments generalize to related sounds. Participants heard ambiguous /d/ or /t/ phonemes during a lexical decision task. They then categorized sounds on /d/-/t/ and /b/-/p/ continua, either in the same voice that they had heard for lexical decision, or in a different voice. Perceptual learning generalized across both speaker and test continua: Changes in perceptual representations are robust and broadly tuned.
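
A minimal sketch of how such category-boundary shifts are commonly quantified: fit a logistic psychometric function to the proportion of /t/ responses at each step of the /d/-/t/ continuum before and after exposure, and compare the fitted boundaries. The continuum size, trial counts, and response data below are simulated for illustration only; none of these numbers come from the paper.

```python
# Hypothetical sketch: quantifying a phonemic category-boundary shift by
# fitting logistic psychometric functions to identification data.
# All data below are simulated for illustration; they are not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def logistic(step, boundary, slope):
    """Probability of a /t/ response at a given continuum step."""
    return 1.0 / (1.0 + np.exp(-(step - boundary) / slope))

rng = np.random.default_rng(0)
steps = np.arange(1, 8)                  # assumed 7-step /d/-/t/ continuum
trials_per_step = 20

def simulate(boundary, slope):
    """Simulate the proportion of /t/ responses per step for one condition."""
    p = logistic(steps, boundary, slope)
    return rng.binomial(trials_per_step, p) / trials_per_step

pre = simulate(boundary=4.0, slope=0.6)   # before exposure
post = simulate(boundary=4.6, slope=0.6)  # after exposure to ambiguous tokens

(b_pre, _), _ = curve_fit(logistic, steps, pre, p0=[4, 1])
(b_post, _), _ = curve_fit(logistic, steps, post, p0=[4, 1])
print(f"category boundary shift: {b_post - b_pre:+.2f} continuum steps")
```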


Psychonomic Bulletin & Review | 2003

Inhibition of return: A graphical meta-analysis of its time course and an empirical test of its temporal and spatial properties

Arthur G. Samuel; Donna Kat

Immediately after a stimulus appears in the visual field, there is often a short period of facilitated processing of stimuli at or near this location. This period is followed by one in which processing is impaired, rather than facilitated. This impairment has been termed inhibition of return (IOR). In the present study, the time course of this phenomenon was examined in two ways. (1) A graphical meta-analysis plotted the size of the effect as a function of the stimulus onset asynchrony (SOA) of the two stimuli. This analysis showed that IOR is impressively stable for SOAs of 300-1,600 msec. It also showed that the literature does not provide any clear sense of the duration of IOR. (2) An empirical approach was, therefore, taken to fill this gap in our knowledge of IOR. In three experiments, IOR was tested using SOAs between 600 and 4,200 msec. IOR was robust for approximately 3 sec and appeared to taper off after this point; the observed duration varied somewhat as a function of the testing conditions. In addition, for the first second, the degree of inhibition was inversely related to the distance of the target from the original stimulus, but for the next 2 sec this spatial distribution was not observed. Theories of the mechanisms and function of IOR must conform to these spatial and temporal properties.
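
A minimal sketch of the kind of graphical meta-analysis described above, assuming the published effects have been collected into a hypothetical CSV file (ior_studies.csv) with one row per published condition; the plot shows the IOR effect as a function of SOA.

```python
# Hypothetical sketch (not the authors' code) of a graphical meta-analysis.
# Assumed CSV columns: 'soa_ms' (stimulus onset asynchrony in ms) and
# 'ior_ms' (RT at the cued location minus RT at the uncued location;
# positive values indicate inhibition of return).
import csv
import matplotlib.pyplot as plt

soas, effects = [], []
with open("ior_studies.csv", newline="") as f:
    for row in csv.DictReader(f):
        soas.append(float(row["soa_ms"]))
        effects.append(float(row["ior_ms"]))

plt.scatter(soas, effects, alpha=0.6)
plt.axhline(0, linewidth=0.8)          # baseline: neither facilitation nor inhibition
plt.xlabel("SOA (ms)")
plt.ylabel("IOR effect (ms)")
plt.title("IOR effect size as a function of SOA")
plt.show()
```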


Attention, Perception, & Psychophysics | 2009

Perceptual learning for speech

Arthur G. Samuel; Tanya Kraljic

Adult language users have an enormous amount of experience with speech in their native language. As a result, they have very well-developed processes for categorizing the sounds of speech that they hear. Despite this very high level of experience, recent research has shown that listeners are capable of redeveloping their speech categorization to bring it into alignment with new variation in their speech input. This reorganization of phonetic space is a type of perceptual learning, or recalibration, of speech processes. In this article, we review several recent lines of research on perceptual learning for speech.


Cognition | 2008

Accommodating variation: Dialects, idiolects, and speech processing

Tanya Kraljic; Susan E. Brennan; Arthur G. Samuel

Listeners are faced with enormous variation in pronunciation, yet they rarely have difficulty understanding speech. Although much research has been devoted to figuring out how listeners deal with variability, virtually none (outside of sociolinguistics) has focused on the source of the variation itself. The current experiments explore whether different kinds of variation lead to different cognitive and behavioral adjustments. Specifically, we compare adjustments to the same acoustic consequence when it is due to context-independent variation (resulting from articulatory properties unique to a speaker) versus context-conditioned variation (resulting from common articulatory properties of speakers who share a dialect). The contrasting results for these two cases show that the source of a particular acoustic-phonetic variation affects how that variation is handled by the perceptual system. We also show that changes in perceptual representations do not necessarily lead to changes in production.


Cognitive Psychology | 2007

Lexical configuration and lexical engagement: when adults learn new words.

Laura Leach; Arthur G. Samuel

People know thousands of words in their native language, and each of these words must be learned at some time in the person's lifetime. A large number of these words will be learned when the person is an adult, reflecting the fact that the mental lexicon is continuously changing. We explore how new words get added to the mental lexicon, and provide empirical support for a theoretical distinction between what we call lexical configuration and lexical engagement. Lexical configuration is the set of factual knowledge associated with a word (e.g., the word's sound, spelling, meaning, or syntactic role). Almost all previous research on word learning has focused on this aspect. However, it is also critical to understand the process by which a word becomes capable of lexical engagement: the ways in which a lexical entry dynamically interacts with other lexical entries, and with sublexical representations. For example, lexical entries compete with each other during word recognition (inhibition within the lexical level), and they also support the activation of their constituents (top-down lexical-phonemic facilitation, and lexically based perceptual learning). We systematically vary the learning conditions for new words, and use separate measures of lexical configuration and engagement. Several surprising dissociations in behavior demonstrate the importance of the theoretical distinction between configuration and engagement.


Psychological Science | 2008

First Impressions and Last Resorts: How Listeners Adjust to Speaker Variability

Tanya Kraljic; Arthur G. Samuel; Susan E. Brennan

Perceptual theories must explain how perceivers extract meaningful information from a continuously variable physical signal. In the case of speech, the puzzle is that little reliable acoustic invariance seems to exist. We tested the hypothesis that speech-perception processes recover invariants not about the signal, but rather about the source that produced the signal. Findings from two manipulations suggest that the system learns those properties of speech that result from idiosyncratic characteristics of the speaker; the same properties are not learned when they can be attributed to incidental factors. We also found evidence for how the system determines what is characteristic: In the absence of other information about the speaker, the system relies on episodic order, representing those properties present during early experience as characteristic of the speaker. This “first-impressions” bias can be overridden, however, when variation is an incidental consequence of a temporary state (a pen in the speaker's mouth), rather than characteristic of the speaker.


Language and Speech | 2004

Perception of Mandarin lexical tones when F0 information is neutralized

Siyun Liu; Arthur G. Samuel

In tone languages, the identity of a word depends on its tone pattern as well as its phonetic structure. The primary cue to tone identity is the fundamental frequency (F0) contour. Two experiments explore how listeners perceive Mandarin monosyllables in which all or part of the F0 information has been neutralized. In Experiment 1, supposedly critical portions of the tonal pattern were neutralized with signal processing techniques, yet identification of the tonal pattern remained quite good. In Experiment 2, even more drastic removal of tonal information was tested, using stimuli whispered by Mandarin speakers, or signal processed to remove the pitch cues. Again, performance was surprisingly good, showing that listeners can use secondary cues when the primary cue is unavailable. Moreover, a comparison of tone perception of naturally whispered monosyllables and the signal processed ones suggests that Mandarin speakers promote the utility of secondary cues when they know that the primary cue will be unavailable. The flexible use of cues to tone in Mandarin is similar to the flexibility that has been found in the production and perception of cues to phonetic identity in Western languages.
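
The paper does not specify its signal-processing pipeline; the sketch below shows one plausible way to neutralize F0 information, using the WORLD vocoder (via the pyworld package) to replace the estimated pitch contour with its voiced-frame mean and resynthesize a monotone token. The file names are placeholders.

```python
# Hypothetical sketch of one way to neutralize F0 information in a Mandarin
# syllable: analyze with the WORLD vocoder, flatten the pitch contour to its
# voiced-frame mean, and resynthesize a monotone version of the recording.
import numpy as np
import pyworld
import soundfile as sf

x, fs = sf.read("ma1.wav")              # assumed mono recording of a Mandarin syllable
x = x.astype(np.float64)

f0, t = pyworld.harvest(x, fs)          # estimate the F0 contour
sp = pyworld.cheaptrick(x, f0, t, fs)   # spectral envelope
ap = pyworld.d4c(x, f0, t, fs)          # aperiodicity

voiced = f0 > 0
flat_f0 = np.where(voiced, f0[voiced].mean(), 0.0)   # monotone pitch; unvoiced frames stay at 0
y = pyworld.synthesize(flat_f0, sp, ap, fs)

sf.write("ma1_f0_neutralized.wav", y, fs)
```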


Cognitive Psychology | 1997

Lexical activation produces potent phonemic percepts.

Arthur G. Samuel

Theorists disagree about whether auditory word recognition is a fully bottom-up, autonomous process, or whether there is top-down processing within a more interactive architecture. The current study provides evidence for top-down lexical to phonemic activation. In several experiments, listeners labeled members of a /bI/-/dI/ test series, before and after listening to repeated presentations of various adapting sounds. Real English words (containing either a /b/ or a /d/) produced reliable adaptation shifts in labeling of the /bI/-/dI/ syllables. Critically, so did words in which the /b/ or /d/ was perceptually restored (when noise replaced the /b/ or /d/). Several control conditions demonstrated that no adaptation occurred when no phonemic restoration occurred. Similarly, no independent role in adaptation was found for lexical representations themselves. Thus, the results indicate that lexical activation can cause the perceptual process to synthesize a highly functional phonemic code. This result provides strong evidence for interactive models of word recognition.


Journal of Experimental Psychology: Human Perception and Performance | 1990

The use of rhythm in attending to speech

Mark A. Pitt; Arthur G. Samuel

Three experiments examined attentional allocation during speech processing to determine whether listeners capitalize on the rhythmic nature of speech and attend more closely to stressed than to unstressed syllables. Subjects performed a phoneme monitoring task in which the target phoneme occurred on a syllable that was predicted to be either stressed or unstressed by the context preceding the target word. Stimuli were digitally edited to eliminate the local acoustic correlates of stress. A sentential context and a context composed of word lists, in which all the words had the same stress pattern, were used. In both cases, the results suggest that attention may be preferentially allocated to stressed syllables during speech processing. However, a normal sentence context may not provide strong predictive cues to lexical stress, limiting the use of the attentional focus.


Psychological Science | 2001

Knowing a Word Affects the Fundamental Perception of the Sounds Within It

Arthur G. Samuel

Understanding spoken language is an exceptional computational achievement of the human cognitive apparatus. Theories of how humans recognize spoken words fall into two categories: Some theories assume a fully bottom-up flow of information, in which successively more abstract representations are computed. Other theories, in contrast, assert that activation of a more abstract representation (e.g., a word) can affect the activation of smaller units (e.g., phonemes or syllables). The two experimental conditions reported here demonstrate the top-down influence of word representations on the activation of smaller perceptual units. The results show that perceptual processes are not strictly bottom-up: Computations at logically lower levels of processing are affected by computations at logically more abstract levels. These results constrain and inform theories of the architecture of human perceptual processing of speech.

Collaboration


Dive into Arthur G. Samuel's collaborations.

Top Co-Authors

Donna Kat
Stony Brook University

Tanya Kraljic
University of Pennsylvania

Xujin Zhang
Stony Brook University

Saioa Larraza
Paris Descartes University

Lee H. Wurm
Wayne State University

Siyun Liu
Stony Brook University