Yuki Kamide
University of Dundee
Publications
Featured research published by Yuki Kamide.
Journal of Memory and Language | 2003
Yuki Kamide; Gerry T. M. Altmann; Sarah L. Haywood
Three eye-tracking experiments using the ‘visual-world’ paradigm are described that explore the basis by which thematic dependencies can be evaluated in advance of linguistic input that unambiguously signals those dependencies. Following Altmann and Kamide (1999), who found that selectional information conveyed by a verb can be used to anticipate an upcoming Theme, we attempt here to draw a more precise picture of the basis for such anticipatory processing. Our data from two studies in English and one in Japanese suggest that (a) verb-based information is not limited to anticipating the immediately following (grammatical) object, but can also anticipate later-occurring objects (e.g., Goals), (b) in combination with information conveyed by the verb, a pre-verbal argument (Agent) can constrain the anticipation of a subsequent Theme, and (c) in a head-final construction such as that typically found in Japanese, both syntactic and semantic constraints extracted from pre-verbal arguments can enable the anticipation, in effect, of a further forthcoming argument in the absence of their head (the verb). We suggest that such processing is the hallmark of an incremental processor that is able to draw on different sources of information (some non-linguistic) at the earliest possible opportunity to establish the fullest possible interpretation of the input at each moment in time.
Cognition | 2009
Gerry T. M. Altmann; Yuki Kamide
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either ‘The woman will put the glass on the table’ or ‘The woman is too lazy to put the glass on the table’. Subsequently, with the scene unchanged, participants heard that the woman ‘will pick up the bottle, and pour the wine carefully into the glass.’ Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after ‘pour’ (anticipating the glass) and at ‘glass’ reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).
Language and Cognitive Processes | 1999
Yuki Kamide; Donald Mitchell
The present study addresses the question of whether structural analyses of verb arguments are postponed until the head verb has been processed (head-driven parsing accounts) or initiated before the verb appears (pre-head attachment accounts). To explore this question in relation to a head-final language, a Japanese dative argument attachment ambiguity was examined in both a questionnaire study (Experiment 1) and a self-paced reading task (Experiment 2). The data suggested that the dative argument attachment ambiguity is resolved in the manner predicted by pre-head attachment accounts. The results were incompatible with most variants of the head-driven parsing model, and were not of the form currently predicted by constraint-satisfaction models. We end by discussing the general theoretical implications of the findings.
Language and Linguistics Compass | 2008
Yuki Kamide
Anticipation is an essential ability for the human cognitive system to survive in its surrounding environment. The present article reviews previous research on anticipatory processes in sentence processing (comprehension). I begin by discussing past research conducted with methods ill-suited to studying anticipation, then turn to more recent work using newer, more appropriate methods, in particular the so-called ‘visual-world’ eye-tracking paradigm and neuropsychological techniques. I then discuss remaining unresolved issues, both methodological and theoretical.
PLOS ONE | 2013
Shane Lindsay; Christoph Scheepers; Yuki Kamide
In describing motion events, verbs of manner provide information about the speed of agents or objects in those events. We used eye tracking to investigate how inferences about this verb-associated speed of motion would influence the time course of attention to a visual scene that matched an event described in language. Eye movements were recorded as participants heard spoken sentences with verbs that implied a fast (“dash”) or slow (“dawdle”) movement of an agent towards a goal. These sentences were heard whilst participants concurrently looked at scenes depicting the agent and a path which led to the goal object. Our results indicate a mapping of events onto the visual scene consistent with participants mentally simulating the movement of the agent along the path towards the goal: when the verb implies a slow manner of motion, participants look more often and for longer along the path to the goal; when the verb implies a fast manner of motion, participants tend to look earlier at the goal and less at the path. These results reveal that event comprehension in the presence of a visual world involves establishing and dynamically updating the locations of entities in response to linguistic descriptions of events.
Journal of Experimental Psychology: Learning, Memory, and Cognition | 2016
Yuki Kamide; Shane Lindsay; Christoph Scheepers; Anuenue Kukona
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye movements are constrained by the visual world and the fact that perceivers rarely fixate regions of empty space.
Cognition | 1999
Gerry T. M. Altmann; Yuki Kamide
Journal of Memory and Language | 2007
Gerry T. M. Altmann; Yuki Kamide
Journal of Psycholinguistic Research | 2003
Yuki Kamide; Christoph Scheepers; Gerry T. M. Altmann
Archive | 2004
Gerry T. M. Altmann; Yuki Kamide