Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Michael K. Tanenhaus is active.

Publication


Featured research published by Michael K. Tanenhaus.


Cognitive Psychology | 1982

Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing

Mark S. Seidenberg; Michael K. Tanenhaus; James M. Leiman; Marie Bienkowski

Five experiments are described on the processing of ambiguous words in sentences. Two classes of ambiguous words (noun-noun and noun-verb) and two types of context (priming and nonpriming) were investigated using a variable stimulus onset asynchrony (SOA) priming paradigm. Noun-noun ambiguities have two semantically unrelated readings that are nouns (e.g., PEN, ORGAN); noun-verb ambiguities have both noun and verb readings that are unrelated (e.g., TIRE, WATCH). Priming contexts contain a word highly semantically or associatively related to one meaning of the ambiguous word; nonpriming contexts favor one meaning of the word through other types of information (e.g., syntactic or pragmatic). In nonpriming contexts, subjects consistently access multiple meanings of words and select one reading within 200 msec. Lexical priming differentially affects the processing of subsequent noun-noun and noun-verb ambiguities, yielding selective access of meaning only in the former case. The results suggest that meaning access is an automatic process which is unaffected by knowledge-based (“top-down”) processing. Whether selective or multiple access of meaning is observed largely depends on the structure of the ambiguous word, not the nature of the context.


Journal of Verbal Learning and Verbal Behavior | 1979

Evidence for Multiple Stages in the Processing of Ambiguous Words in Syntactic Contexts.

Michael K. Tanenhaus; James M. Leiman; Mark S. Seidenberg

A variable time delay naming latency paradigm was used to investigate the processing of noun-verb lexical ambiguities (e.g., watch) in syntactic contexts that biased either the noun or the verb reading (e.g., I bought the watch; I will watch). Target words related to either the noun or verb reading were presented at 0, 200, and 600 msec following the sentence-final ambiguous word. At 0 msec, naming latencies to targets related to either reading were facilitated regardless of the biasing context. By 200 msec, facilitation obtained only for targets related to the reading of the ambiguous word biased by the context. The results support a two-stage model in which all readings of ambiguous words are initially accessed and then the inappropriate readings are rapidly suppressed.
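
The priming logic of this paradigm can be made concrete: facilitation is the naming latency to an unrelated control target minus the latency to a target related to one reading, computed separately at each SOA. The Python sketch below uses invented latencies purely to illustrate the bookkeeping; they are not data from the study.

# Invented mean naming latencies (ms) for a noun-biasing context, purely to
# illustrate how facilitation is scored in this paradigm (not data from the study).
latencies = {  # (SOA in ms, target type) -> mean naming latency
    (0, "noun-related"): 560, (0, "verb-related"): 562, (0, "control"): 600,
    (200, "noun-related"): 565, (200, "verb-related"): 598, (200, "control"): 600,
}

def facilitation(soa, target):
    """Priming effect: control latency minus related-target latency."""
    return latencies[(soa, "control")] - latencies[(soa, target)]

for soa in (0, 200):
    print(soa, {t: facilitation(soa, t) for t in ("noun-related", "verb-related")})
# At 0 ms both readings show facilitation (multiple access); by 200 ms only the
# contextually appropriate reading remains facilitated (selection).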


Cognitive Psychology | 2001

Time course of frequency effects in spoken-word recognition: evidence from eye movements.

Delphine Dahan; James S. Magnuson; Michael K. Tanenhaus

In two experiments, eye movements were monitored as participants followed spoken instructions to click on and move pictures with a computer mouse. In Experiment 1, a referent picture (e.g., the picture of a bench) was presented along with three pictures, two of which had names that shared the same initial phonemes as the name of the referent (e.g., bed and bell). Participants were more likely to fixate the picture with the higher frequency name (bed) than the picture with the lower frequency name (bell). In Experiment 2, referent pictures were presented with three unrelated distractors. Fixation latencies to referents with high-frequency names were shorter than those to referents with low-frequency names. The proportions of fixations to the referents and distractors were analyzed in 33-ms time slices to provide fine-grained information about the time course of frequency effects. These analyses established that frequency affects the earliest moments of lexical access and ruled out a late-acting, decision-bias locus for frequency. Simulations using models in which frequency operates on resting-activation levels, on connection strengths, and as a postactivation decision bias provided further constraints on the locus of frequency effects.
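
The time-slice analysis lends itself to a short sketch: eye-tracking samples are binned into 33-ms slices and, within each slice, the proportion of samples on each picture is computed. The sample format and object labels below are assumptions made for illustration; this is not the authors' analysis code.

from collections import Counter

SLICE_MS = 33  # width of the analysis window reported in the paper

def fixation_proportions(samples, objects=("referent", "competitor", "distractor")):
    """Bin (time_ms, fixated_object) samples into 33-ms slices and return,
    for each slice onset, the proportion of samples on each object."""
    slices = {}
    for time_ms, obj in samples:
        slices.setdefault(time_ms // SLICE_MS, Counter())[obj] += 1
    return {idx * SLICE_MS: {o: counts[o] / sum(counts.values()) for o in objects}
            for idx, counts in sorted(slices.items())}

# Three samples falling into the first slice, two of them on the referent.
print(fixation_proportions([(0, "referent"), (10, "competitor"), (20, "referent")]))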


Journal of Memory and Language | 2003

The effects of common ground and perspective on domains of referential interpretation

Joy E. Hanna; Michael K. Tanenhaus; John C. Trueswell

Addressees’ eye movements were tracked as they followed instructions given by a confederate speaker hidden from view. Experiment 1 used objects in common ground (known to both participants) or privileged ground (known to the addressee). Although privileged objects interfered with reference to an identical object in common ground, addressees were always more likely to look at an object in common ground than privileged ground. Experiment 2 used definite and indefinite referring expressions with early or late points of disambiguation, depending on the uniqueness of the display objects. The speaker’s and addressee’s perspectives matched when the speaker was accurately informed about the display, and mismatched when the speaker was misinformed. When perspectives matched, addressees identified the target faster with early than with late disambiguation displays. When perspectives mismatched, addressees still identified the target quickly, showing an ability to use the speaker’s perspective. These experiments demonstrate that although addressees cannot completely ignore information in privileged ground, common ground and perspective each have immediate effects on reference resolution.


Journal of Memory and Language | 2002

Accent and reference resolution in spoken-language comprehension

Delphine Dahan; Michael K. Tanenhaus; Craig G. Chambers

The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse. In Experiment 1, the first utterance instructed participants to move one object above or below a shape (e.g., “Put the candle/candy below the triangle”) and the second utterance contained an accented or deaccented definite noun phrase which referred to the same object or introduced a new entity (e.g., “Now put the CANDLE above the square” vs. “Now put the candle ABOVE THE SQUARE”). Fixations to the competitor (e.g., candy) demonstrated a bias to interpret deaccented nouns as anaphoric and accented nouns as nonanaphoric. Experiment 2 used only accented nouns in the second instruction, varying whether the referent of this second instruction was the Theme of the first instruction (e.g., “Put the candle below the triangle”) or the Goal of the first instruction (e.g., “Put the necklace below the candle”). Participants preferred to interpret accented noun phrases as referring to a previously mentioned nonfocused entity (the Goal) rather than as introducing a new unmentioned entity.


Language | 1988

Linguistic structure in language processing

Greg Carlson; Michael K. Tanenhaus

Contents: The Internal Structure of the Syllable; Reading Complex Words; A Synthesis of Some Recent Work in Sentence Production; The Isolability of Syntactic Processing; Neuropsychological Evidence for Linguistic Modularity; Parsing Complexity and a Theory of Parsing; Comprehending Sentences with Long-Distance Dependencies; Thematic Structures and Sentence Comprehension; Integrating Information in Text Comprehension: The Interpretation of Anaphoric Noun Phrases; Index of Names; Index of Subjects.


Journal of Psycholinguistic Research | 1995

Eye movements as a window into real-time spoken language comprehension in natural contexts.

Kathleen M. Eberhard; Michael J. Spivey-Knowlton; Julie C. Sedivy; Michael K. Tanenhaus

When listeners follow spoken instructions to manipulate real objects, their eye movements to the objects are closely time locked to the referring words. We review five experiments showing that this time-locked characteristic of eye movements provides a detailed profile of the processes that underlie real-time spoken language comprehension. Together, the first four experiments showed that listeners immediately integrated lexical, sublexical, and prosodic information in the spoken input with information from the visual context to reduce the set of referents to the intended one. The fifth experiment demonstrated that a visual referential context affected the initial structuring of the linguistic input, eliminating even strong syntactic preferences that result in clear garden paths when the referential context is introduced linguistically. We argue that context affected the earliest moments of language processing because it was highly accessible and relevant to the behavioral goals of the listener.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 1998

Syntactic Ambiguity Resolution in Discourse: Modeling the Effects of Referential Context and Lexical Frequency

Michael J. Spivey; Michael K. Tanenhaus

Sentences with temporarily ambiguous reduced relative clauses (e.g., The actress selected by the director believed that...) were preceded by discourse contexts biasing a main clause or a relative clause. Eye movements in the disambiguating region (by the director) revealed that, in the relative clause biasing contexts, ambiguous reduced relatives were no more difficult to process than unambiguous reduced relatives or full (unreduced) relatives. Regression analyses demonstrated that the effects of discourse context at the point of ambiguity (e.g., selected) interacted with the past participle frequency of the ambiguous verb. Reading times were modeled using a constraint-based competition framework in which multiple constraints are immediately integrated during parsing and interpretation. Simulations suggested that this framework reconciles the superficially conflicting results in the literature on referential context effects on syntactic ambiguity resolution.
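
The constraint-based competition framework referred to here can be illustrated with a simple normalized-recurrence-style loop: each constraint (for example, discourse context and past-participle frequency) assigns probabilistic support to the main-clause and reduced-relative readings, the supports are combined as a weighted average, and the combined interpretation is fed back to the constraints until one reading reaches a criterion, with cycles-to-criterion standing in for processing difficulty. The supports, weights, and criterion below are illustrative assumptions, not the fitted parameters reported in the paper.

def compete(constraints, weights, criterion=0.95, max_cycles=100):
    """Illustrative competition loop: constraints are probability vectors over
    [main clause, reduced relative]; cycles-to-criterion models reading cost."""
    n = len(constraints[0])
    integrated = constraints[0]
    for cycle in range(1, max_cycles + 1):
        # Integration: weighted average of the constraints' support.
        integrated = [sum(w * c[i] for w, c in zip(weights, constraints))
                      for i in range(n)]
        if max(integrated) >= criterion:
            return integrated, cycle
        # Feedback: bias each constraint toward the integrated interpretation,
        # then renormalize so it stays a probability vector.
        constraints = [[c[i] * (1 + integrated[i]) for i in range(n)]
                       for c in constraints]
        constraints = [[v / sum(c) for v in c] for c in constraints]
    return integrated, max_cycles

# A relative-clause-biasing context combined with a verb whose past-participle
# reading is fairly frequent (supports are invented for illustration).
support, cycles = compete(constraints=[[0.3, 0.7], [0.4, 0.6]], weights=[0.5, 0.5])
print(support, cycles)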


Cognition | 2002

Gradient effects of within-category phonetic variation on lexical access

Bob McMurray; Michael K. Tanenhaus; Richard N. Aslin

In order to determine whether small within-category differences in voice onset time (VOT) affect lexical access, eye movements were monitored as participants indicated which of four pictures was named by spoken stimuli that varied along a 0-40 ms VOT continuum. Within-category differences in VOT resulted in gradient increases in fixations to cross-boundary lexical competitors as VOT approached the category boundary. Thus, fine-grained acoustic/phonetic differences are preserved in patterns of lexical activation for competing lexical candidates and could be used to maximize the efficiency of on-line word recognition.
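
The gradiency claim is that competitor fixations rise continuously as VOT approaches the category boundary, rather than changing only once the boundary is crossed. A minimal within-category check is to estimate the slope of competitor-fixation proportion against VOT for tokens that all received the same category label; the data layout and values below are invented for illustration, not the authors' analysis.

def ols_slope(points):
    """Least-squares slope of competitor-fixation proportion against VOT (ms)."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    return cov / var

# Invented (VOT ms, competitor-fixation proportion) pairs for tokens all heard
# as voiced; a positive slope as VOT moves toward the category boundary near
# the middle of the continuum indicates gradient sensitivity.
print(ols_slope([(0, 0.05), (5, 0.07), (10, 0.10), (15, 0.14)]))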


Journal of Psycholinguistic Research | 2000

Eye movements and lexical access in spoken-language comprehension: evaluating a linking hypothesis between fixations and linguistic processing.

Michael K. Tanenhaus; James S. Magnuson; Delphine Dahan; Craig G. Chambers

A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.
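
One concrete form of such a linking hypothesis maps the momentary lexical activations of the displayed items onto predicted fixation probabilities with a Luce-style choice rule. The sketch below illustrates that idea with invented activation values and a free scaling parameter; it should not be read as the specific model evaluated in the paper.

import math

def predicted_fixation_probs(activations, k=5.0):
    """Luce-style linking rule: exponentiate each displayed item's lexical
    activation and normalize, giving predicted fixation probabilities."""
    strengths = {word: math.exp(k * act) for word, act in activations.items()}
    total = sum(strengths.values())
    return {word: s / total for word, s in strengths.items()}

# Invented activations early in a spoken word: an onset competitor is still
# strongly active, a rhyme competitor less so, an unrelated item hardly at all.
print(predicted_fixation_probs(
    {"target": 0.60, "onset competitor": 0.55,
     "rhyme competitor": 0.20, "unrelated": 0.05}))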

Collaboration


Dive into Michael K. Tanenhaus's collaborations.

Top Co-Authors

Greg Carlson, University of Rochester

Ellen Campana, Arizona State University

Delphine Dahan, University of Pennsylvania

John C. Trueswell, University of Pennsylvania