
Publications


Featured research published by Gerry T. M. Altmann.


Journal of Memory and Language | 2003

The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements

Yuki Kamide; Gerry T. M. Altmann; Sarah L. Haywood

Three eye-tracking experiments using the ‘visual-world’ paradigm are described that explore the basis by which thematic dependencies can be evaluated in advance of linguistic input that unambiguously signals those dependencies. Following Altmann and Kamide (1999), who found that selectional information conveyed by a verb can be used to anticipate an upcoming Theme, we attempt to draw here a more precise picture of the basis for such anticipatory processing. Our data from two studies in English and one in Japanese suggest that (a) verb-based information is not limited to anticipating the immediately following (grammatical) object, but can also anticipate later occurring objects (e.g., Goals), (b) in combination with information conveyed by the verb, a pre-verbal argument (Agent) can constrain the anticipation of a subsequent Theme, and (c) in a head-final construction such as that typically found in Japanese, both syntactic and semantic constraints extracted from pre-verbal arguments can enable the anticipation, in effect, of a further forthcoming argument in the absence of their head (the verb). We suggest that such processing is the hallmark of an incremental processor that is able to draw on different sources of information (some non-linguistic) at the earliest possible opportunity to establish the fullest possible interpretation of the input at each moment in time.


Ai Magazine | 1989

Cognitive Models of Speech Processing: Psycholinguistic and Computational Perspectives

Gerry T. M. Altmann

The 1988 Workshop on Cognitive Models of Speech Processing was held at Park Hotel Fiorelle, Sperlonga, Italy, on 16-20 May 1988. Twenty-five participants gathered in this small coastal village, where the Emperor Tiberius once kept a summer house, to discuss psycholinguistic and computational issues in speech and natural language processing.


Journal of Memory and Language | 1992

Avoiding the garden path: Eye movements in context

Gerry T. M. Altmann; Alan Garnham; Yvette Dennis

Pragmatic factors, such as referential context, influence the decisions of the syntactic processor. At issue, however, has been whether such effects take place in the first or second pass analysis of the sentence. It has been suggested that eye movement studies are the only appropriate means for deciding between first and second pass effects. In this paper, we report two experiments using ambiguous relative/complement sentences and unambiguous controls. In Experiment 1 we show that referential context eliminates all the first pass reading time differences that are indicative of a garden path to the relative continuation in the null context. We observe, however, that the context does not eliminate the increased proportion of regressions from that disambiguating continuation. We therefore introduce a regression-contingent analysis of the first pass reading times and show that this new measure provides an important tool for aiding in the interpretation of the apparently conflicting data. Experiment 2 investigated whether the results of Experiment 1 were an artifact of the kinds of questions about the contexts that were asked in order to encourage subjects to attend to the contexts. The results demonstrated that the use of explicitly referential questions had little effect. There was some weak evidence for a garden path effect in this second experiment, but the regression-contingent measure enabled us to locate all garden path effects in only a small proportion of trials and to conclude that context does influence the initial decisions of the syntactic processor.
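The regression-contingent analysis described in this abstract can be sketched as a simple split of first-pass reading times by whether a regression was launched from the critical region. The trial values and field names below are invented for illustration; they are not from the paper.

```python
# Illustrative regression-contingent first-pass analysis:
# compare mean first-pass reading time on trials with vs. without
# a regression out of the disambiguating region. Data are invented.

trials = [
    {"first_pass_ms": 310, "regression_out": False},
    {"first_pass_ms": 405, "regression_out": True},
    {"first_pass_ms": 298, "regression_out": False},
    {"first_pass_ms": 450, "regression_out": True},
    {"first_pass_ms": 322, "regression_out": False},
]

def regression_contingent_means(trials):
    """Mean first-pass time, computed separately for regression
    and no-regression trials."""
    by_flag = {True: [], False: []}
    for t in trials:
        by_flag[t["regression_out"]].append(t["first_pass_ms"])
    return {flag: sum(v) / len(v) for flag, v in by_flag.items() if v}

means = regression_contingent_means(trials)
print(means)  # {True: 427.5, False: 310.0} for the invented data above
```

The point of the measure is visible even in this toy version: an apparent garden-path cost in the overall first-pass mean can be localized to the subset of trials on which a regression actually occurred.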


Cognition | 2009

Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation

Gerry T. M. Altmann; Yuki Kamide

Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either ‘The woman will put the glass on the table’ or ‘The woman is too lazy to put the glass on the table’. Subsequently, with the scene unchanged, participants heard that the woman ‘will pick up the bottle, and pour the wine carefully into the glass.’ Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after ‘pour’ (anticipating the glass) and at ‘glass’ reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).


Attention Perception & Psychophysics | 1988

The recognition of words after their acoustic offsets in spontaneous speech: Effects of subsequent context

Ellen Gurman Bard; Richard Shillcock; Gerry T. M. Altmann

Three experiments are presented that investigated the recognition of words after their acoustic offsets in conversational speech. Utterances randomly selected from the speech of 24 individuals (total N = 288) were gated in one-word increments and heard by 12 listeners each. Of the successful recognitions, 21% occurred after the acoustic offset of the word in question and in the presence of subsequent context. The majority of late recognitions implicate subsequent context in the recognition process. Late recognitions were distributed nonrandomly with respect to the characteristics of the stimulus word tokens. Control experiments demonstrated that late recognitions were not artifacts of eliminating discourse context, of imposing artificial word boundaries, or of repeating words within successive gated presentations. The effects could be replicated only if subsequent context was available. The implications are discussed for models of word recognition in continuous speech.


Cognitive Science | 1999

Mapping across domains without feedback: A neural network model of transfer of implicit knowledge

Zoltan Dienes; Gerry T. M. Altmann; Shi-Ji Gao

This paper shows how a neural network can model the way people who have acquired knowledge of an artificial grammar in one perceptual domain (e.g., sequences of tones differing in pitch) can apply the knowledge to a quite different perceptual domain (e.g., sequences of letters). It is shown that a version of the Simple Recurrent Network (SRN) can transfer its knowledge of artificial grammars across domains without feedback. The performance of the model is sensitive to at least some of the same variables that affect subjects’ performance; for example, the model is responsive to both the grammaticality of test sequences and their similarity to training sequences, to the cover task used during training, and to whether training is on bigrams or larger sequences.
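For readers unfamiliar with the architecture, a minimal sketch of an Elman-style Simple Recurrent Network forward pass is given below in NumPy. The layer sizes, random weights, and symbol sequence are illustrative assumptions only; the paper's actual training regime and cross-domain transfer mechanism are not reproduced here.

```python
import numpy as np

# Minimal Elman-style SRN sketch. Sizes and weights are arbitrary
# illustrations, not the architecture used in the paper.

rng = np.random.default_rng(0)

n_in, n_hid = 4, 8                           # one-hot symbols; hidden units
W_xh = rng.normal(0, 0.5, (n_hid, n_in))     # input  -> hidden
W_hh = rng.normal(0, 0.5, (n_hid, n_hid))    # context -> hidden
W_hy = rng.normal(0, 0.5, (n_in, n_hid))     # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def srn_step(x, context):
    """One SRN time step: the hidden state depends on the current input
    plus a copy of the previous hidden state (the context layer)."""
    h = np.tanh(W_xh @ x + W_hh @ context)
    y = softmax(W_hy @ h)                    # distribution over next symbol
    return y, h

# Feed a short symbol sequence through the network, carrying the
# context (previous hidden state) forward at each step.
context = np.zeros(n_hid)
for sym in [0, 1, 2, 3]:
    x = np.eye(n_in)[sym]
    y, context = srn_step(x, context)

print(y)  # predicted distribution over the next symbol
```

The defining feature, visible in `srn_step`, is that the network's only memory of the sequence is the copied-back hidden state, which is what allows it to learn sequential dependencies of the kind an artificial grammar defines.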


Visual Cognition | 2007

Visual-shape competition during language-mediated attention is based on lexical input and not modulated by contextual appropriateness

Falk Huettig; Gerry T. M. Altmann

Visual attention can be directed immediately, as a spoken word unfolds, towards conceptually related but nonassociated objects, even if they mismatch on other dimensions that would normally determine which objects in the scene were appropriate referents for the unfolding word (Huettig & Altmann, 2005). Here we demonstrate that the mapping between language and concurrent visual objects can also be mediated by visual-shape relations. On hearing “snake”, participants directed overt attention immediately, within a visual display depicting four objects, to a picture of an electric cable, although participants had viewed the visual display with four objects for approximately 5 s before hearing the target word—sufficient time to recognize the objects for what they were. The time spent fixating the cable correlated significantly with ratings of the visual similarity between snakes in general and this particular cable. Importantly, with sentences contextually biased towards the concept snake, participants looked at the snake well before the onset of “snake”, but they did not look at the visually similar cable until hearing “snake”. Finally, we demonstrate that such activation can, under certain circumstances (e.g., during the processing of dominant meanings of homonyms), constrain the direction of visual attention even when it is clearly contextually inappropriate. We conclude that language-mediated attention can be guided by a visual match between spoken words and visual objects, but that such a match is based on lexical input and may not be modulated by contextual appropriateness.


Journal of Experimental Psychology: Learning, Memory and Cognition | 2001

Two modes of transfer in artificial grammar learning.

Richard J. Tunney; Gerry T. M. Altmann

Participants can transfer grammatical knowledge acquired implicitly in 1 vocabulary to new sequences instantiated in both the same and a novel vocabulary. Two principal theories have been advanced to account for these effects. One suggests that sequential dependencies form the basis for cross-domain transfer (e.g., Z. Dienes, G. T. M. Altmann, & S. J. Gao, 1999). Another argues that a form of episodic memory known as abstract analogy is sufficient (e.g., L. R. Brooks & J. R. Vokey, 1991). Three experiments reveal the contributions of these two processes. In Experiment 1 sequential dependencies form the only basis for transfer. Experiment 2 demonstrates that this process is impaired by a change in the distributional properties of the language. Experiment 3 demonstrates that abstract analogy of repetition structure is relatively immune to such a change. These findings inform theories of artificial grammar learning and the transfer of grammatical knowledge.


Language and Cognitive Processes | 1988

Ambiguity, parsing strategies, and computational models

Gerry T. M. Altmann

A variety of computational models have been developed in recent years to model the behaviour of the Human Sentence Processing Mechanism (HSPM) when it encounters local syntactic ambiguities. The majority of these incorporate the assumption that the HSPM makes its initial choice of analysis according to a small number of exclusively syntactic principles. The arguments in favour of this structural approach range from the computational efficiency of parsers incorporating these structurally based parsing strategies to the empirical evidence which has been claimed to refute the alternative interactive account of ambiguity resolution, in which contextual information can be used in order to determine the initial choice of analysis. The present paper reviews some of these models, and argues that the assumptions concerning their underlying computational efficiency are flawed. Experimental evidence is presented which is suggestive of the alternative interactive account.


Quarterly Journal of Experimental Psychology | 2011

Looking at anything that is green when hearing “frog”: How object surface colour and stored object colour knowledge influence language-mediated overt attention

Falk Huettig; Gerry T. M. Altmann

Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., “spinach”; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition, our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
