Ian M. Thornton
University of Malta
Publications
Featured research published by Ian M. Thornton.
Visual Cognition | 2000
Diego Fernandez-Duque; Ian M. Thornton
Evidence from many different paradigms (e.g. change blindness, inattentional blindness, transsaccadic integration) indicates that observers are often very poor at reporting changes to their visual environment. Such evidence has been used to suggest that the spatio-temporal coherence needed to represent change can only occur in the presence of focused attention. In four experiments we use modified change blindness tasks to demonstrate (a) that sensitivity to change does occur in the absence of awareness, and (b) that this sensitivity does not rely on the redeployment of attention. We discuss these results in relation to theories of scene perception, and propose a reinterpretation of the role of attention in representing change.
Vision Research | 2003
B. Knappmeyer; Ian M. Thornton; H. H. Bülthoff
Previous research has shown that facial motion can carry information about age, gender, emotion and, at least to some extent, identity. By combining recent computer animation techniques with psychophysical methods, we show that during the computation of identity the human face recognition system integrates both types of information: individual non-rigid facial motion and individual facial form. This has important implications for cognitive and neural models of face perception, which currently emphasize a separation between the processing of invariant aspects (facial form) and changeable aspects (facial motion) of faces.
Cognitive Neuropsychology | 1998
Ian M. Thornton; Jeannine Pinto; Maggie Shiffrar
To function adeptly within our environment, we must perceive and interpret the movements of others. What mechanisms underlie our exquisite visual sensitivity to human movement? To address this question, a set of psychophysical studies was conducted to ascertain the temporal characteristics of the visual perception of human locomotion. Subjects viewed a computer-generated point-light walker presented within a mask under conditions of apparent motion. The temporal delay between the display frames as well as the motion characteristics of the mask were varied. With sufficiently long trial durations, performance in a direction discrimination task remained fairly constant across inter-stimulus interval (ISI) when the walker was presented within a random motion mask but increased with ISI when the mask motion duplicated the motion of the walker. This pattern of results suggests that both low-level and high-level visual analyses are involved in the visual perception of human locomotion. These findings are discussed in relation to recent neurophysiological data suggesting that the visual perception of human movement may involve a functional linkage between the visual and motor systems.
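The key dependent measure in this paradigm is direction-discrimination accuracy as a function of ISI, computed separately for the random-motion and walker-motion masks. The sketch below shows one minimal way such trial data might be aggregated; the trial records and field names are hypothetical, and this is not the authors' original analysis code.

```python
# Minimal sketch: accuracy as a function of ISI, split by mask type.
# Trial records are hypothetical; this is not the original analysis code.
from collections import defaultdict

trials = [
    # (isi_ms, mask_type, correct)
    (0,   "random", True), (50,  "random", True), (100, "random", False),
    (0,   "walker", True), (50,  "walker", True), (100, "walker", True),
]

def accuracy_by_condition(trials):
    """Group trials by (mask, ISI) and return the proportion correct per cell."""
    counts = defaultdict(lambda: [0, 0])          # (mask, isi) -> [n_correct, n_total]
    for isi, mask, correct in trials:
        counts[(mask, isi)][0] += int(correct)
        counts[(mask, isi)][1] += 1
    return {key: n_corr / n_tot for key, (n_corr, n_tot) in counts.items()}

if __name__ == "__main__":
    for (mask, isi), acc in sorted(accuracy_by_condition(trials).items()):
        print(f"mask={mask:6s}  ISI={isi:3d} ms  accuracy={acc:.2f}")
```

The signature result would then appear as a flat accuracy profile over ISI for the random mask and a rising profile for the walker mask.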
Journal of Cognitive Neuroscience | 2003
Diego Fernandez-Duque; Giordana Grossi; Ian M. Thornton; Helen J. Neville
Awareness of change within a visual scene only occurs in the presence of focused attention. When two versions of a complex scene are presented in alternating sequence separated by a blank mask, unattended changes usually remain undetected, although they may be represented implicitly. To test whether awareness of change and focused attention had the same or separable neurophysiological substrates, and to search for the neural substrates of implicit representation of change, we recorded event-related brain potentials (ERPs) during a change blindness task. Relative to active search, focusing attention in the absence of a change enhanced an ERP component over frontal sites around 100–300 msec after stimulus onset, and at posterior sites in the 150–300 msec window. Focusing attention on the location of a change that subjects were aware of replicated those attentional effects, but also produced a unique positive deflection in the 350–600 msec window, broadly distributed with its epicenter in mediocentral areas. The unique topography and time course of this latter modulation, together with its dependence on the aware perception of change, distinguishes this awareness-of-change electrophysiological response from the electrophysiological effects of focused attention. Finally, implicit representation of change elicited a distinct electrophysiological event: unaware changes triggered a positive deflection in the 240–300 msec window, relative to trials with no change. Overall, the present data suggest that attention, awareness of change, and implicit representation of change may be mediated by separate underlying systems.
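The reported effects are mean-amplitude differences between conditions within fixed post-stimulus time windows (e.g. 350–600 msec for awareness of change). Below is a minimal numpy sketch of that windowing step, assuming epoched single-electrode data as a trials × samples array with a known sampling rate and baseline; it illustrates the general computation only, not the authors' analysis pipeline.

```python
import numpy as np

def mean_window_amplitude(epochs, sfreq, t_start, t_end, baseline_s=0.1):
    """Mean amplitude in [t_start, t_end] seconds after stimulus onset.

    epochs : (n_trials, n_samples) array of one electrode's voltage,
             with `baseline_s` seconds of pre-stimulus data at the start.
    """
    onset = int(baseline_s * sfreq)
    i0 = onset + int(t_start * sfreq)
    i1 = onset + int(t_end * sfreq)
    return epochs[:, i0:i1].mean(axis=1)

# Hypothetical epoched data: 40 "aware change" and 40 "no change" trials,
# 0.1 s baseline plus 0.7 s post-stimulus, sampled at 250 Hz.
rng = np.random.default_rng(0)
sfreq, n_samples = 250, int(0.8 * 250)
aware = rng.normal(0.0, 5.0, size=(40, n_samples))
no_change = rng.normal(0.0, 5.0, size=(40, n_samples))

# Awareness-of-change effect: amplitude difference in the 350-600 ms window.
aware_amp = mean_window_amplitude(aware, sfreq, 0.350, 0.600)
none_amp = mean_window_amplitude(no_change, sfreq, 0.350, 0.600)
print(f"effect = {aware_amp.mean() - none_amp.mean():.2f} microvolts")
```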
Perception | 2002
Ian M. Thornton; Zoe Kourtzi
In a series of three experiments, we used a sequential matching task to explore the impact of non-rigid facial motion on the perception of human faces. Dynamic prime images, in the form of short video sequences, facilitated matching responses relative to a single static prime image. This advantage was observed whenever the prime and target showed the same face but an identity match was required across expression (experiment 1) or view (experiment 2). No facilitation was observed for identical dynamic prime sequences when the matching dimension was shifted from identity to expression (experiment 3). We suggest that the observed dynamic advantage, the first reported for non-degraded facial images, arises because the matching task places more emphasis on visual working memory than typical face recognition tasks. More specifically, we believe that representational mechanisms optimised for the processing of motion and/or change-over-time are established and maintained in working memory and that such ‘dynamic representations’ (Freyd, 1987, Psychological Review, 94, 427–438) capitalise on the increased information content of the dynamic primes to enhance performance.
Experimental Brain Research | 2006
K. S. Pilz; Ian M. Thornton; H. H. Bülthoff
Recently there has been growing interest in the role that motion might play in the perception and representation of facial identity. Most studies have used old/new recognition tasks. However, especially for non-rigid motion, these studies have often produced contradictory results. Here, we used a delayed visual search paradigm to explore how learning is affected by non-rigid facial motion. In the current studies we trained observers on two frontal-view faces, one moving non-rigidly, the other a static picture. After a delay, observers were asked to identify the targets in static search arrays containing 2, 4 or 6 faces. On a given trial, target and distractor faces could be shown in one of five viewpoints: frontal, or 22° or 45° to the left or right. We found that familiarizing observers with dynamic faces led to a constant reaction time advantage across all set sizes and viewpoints compared to static familiarization. This suggests that non-rigid motion affects identity decisions even across extended periods of time and changes in viewpoint. Furthermore, it seems that such effects may be difficult to observe using more traditional old/new recognition tasks.
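A constant reaction-time advantage across set sizes corresponds to a shift in the intercept of the RT × set size function, with the search slope (ms per item) left unchanged. The sketch below shows one way such slopes and intercepts might be estimated; the mean RT values are hypothetical and this is not the authors' analysis code.

```python
import numpy as np

def search_function(set_sizes, mean_rts):
    """Fit RT = slope * set_size + intercept; return (slope ms/item, intercept ms)."""
    slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)
    return slope, intercept

set_sizes = np.array([2, 4, 6])
# Hypothetical mean correct RTs (ms) for each familiarization condition.
rt_dynamic = np.array([820.0, 900.0, 980.0])   # dynamic (moving face) familiarization
rt_static = np.array([870.0, 950.0, 1030.0])   # static (picture) familiarization

for label, rts in [("dynamic", rt_dynamic), ("static", rt_static)]:
    slope, intercept = search_function(set_sizes, rts)
    print(f"{label:8s} slope = {slope:.1f} ms/item, intercept = {intercept:.1f} ms")
```

In this hypothetical pattern the two conditions share the same slope, and the dynamic advantage appears only in the intercept, matching the "constant advantage across set sizes" description.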
Spatial Vision | 2002
Ian M. Thornton
There have been many previous reports of mislocalization associated with moving objects (e.g. the flash-lag effect, the Fröhlich effect, representational momentum). Across four experiments, a new form of mislocalization, the onset repulsion effect (ORE), is explored in which the error is always back along the observed path of motion. That is, when observers are asked to localize both the initial onset and the final offset positions of a moving object, by far the largest and most systematic error they make is in placing the onset point too early along the correct path of motion. Errors orthogonal to the path of motion and errors in localizing the offset point are minimal by comparison. Errors are also very small when motion is implied rather than continuous. The ORE can be observed with and without fixation, and, as with other mislocalization effects, shows some dependence on direction and velocity. Because the most obvious prediction in these studies, based on previous reports of mislocalization and the known properties of the visual system, would be for forward rather than backward errors, discussion focuses on the type of mechanism that may have given rise to the observed pattern of results.
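The ORE is defined by the sign of the localization error along the path of motion: reported onset positions fall behind the true onset, opposite to the direction of travel. A minimal sketch of that decomposition, projecting the 2-D error vector onto the motion direction and its orthogonal, is shown below; the coordinates are hypothetical and serve only to illustrate the measure.

```python
import numpy as np

def decompose_error(true_pos, reported_pos, motion_dir):
    """Split a 2-D localization error into along-path and orthogonal components.

    A negative along-path component means the reported position lies behind
    the true position relative to the direction of motion, which is the
    signature of the onset repulsion effect.
    """
    error = np.asarray(reported_pos, float) - np.asarray(true_pos, float)
    d = np.asarray(motion_dir, float)
    d = d / np.linalg.norm(d)                 # unit vector along the motion path
    along = float(error @ d)                  # signed error along the path
    orthogonal = float(np.linalg.norm(error - along * d))
    return along, orthogonal

# Hypothetical trial: rightward motion, onset reported slightly behind the true onset.
along, orth = decompose_error(true_pos=(100, 200), reported_pos=(92, 201), motion_dir=(1, 0))
print(f"along-path error = {along:.1f} px (negative = backwards), orthogonal = {orth:.1f} px")
```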
Journal of Experimental Psychology: Human Perception and Performance | 2003
Diego Fernandez-Duque; Ian M. Thornton
Several recent findings support the notion that changes in the environment can be implicitly represented by the visual system. S. R. Mitroff, D. J. Simons, and S. L. Franconeri (2002) challenged this view and proposed alternative interpretations based on explicit strategies. Across 4 experiments, the current study finds no empirical support for such alternative proposals. Experiment 1 shows that subjects do not rely on unchanged items when locating an unaware change. Experiments 2 and 3 show that unaware changes affect performance even when they occur at an unpredictable location. Experiment 4 shows that the unaware congruency effect does not depend simply on the pattern of the final display. The authors point to converging evidence from other methodologies and highlight several weaknesses in Mitroff et al.'s theoretical arguments. It is concluded here that implicit representation of change provides the most parsimonious explanation for both past and present findings.
Spatial Vision | 2001
Ian M. Thornton; Diego Fernandez-Duque
Several paradigms (e.g. change blindness, inattentional blindness, transsaccadic integration) indicate that observers are often very poor at reporting changes to their visual environment. Such evidence has been used to suggest that the spatio-temporal coherence needed to represent change can only occur in the presence of focused attention. However, those studies almost always rely on explicit reports. It remains a possibility that the visual system can implicitly detect change, but that in the absence of focused attention, the change does not reach awareness and consequently is not reported. To test this possibility, we used a simple change detection paradigm coupled with a speeded orientation discrimination task. Even when observers reported being unaware of a change in an item's orientation, its final orientation effectively biased their response in the orientation discrimination task. In both aware and unaware trials, errors were most frequent when the changed item and the probe had incongruent orientations. These results demonstrate that the nature of the change can be represented in the absence of awareness.
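The key measure here is the congruency effect: the difference in error rate between incongruent and congruent trials, computed separately for trials on which observers reported being aware or unaware of the change. A minimal sketch over hypothetical trial records follows; it is not the original analysis.

```python
from collections import defaultdict

# Hypothetical trial records: (aware, congruent, error)
trials = [
    (False, True, False), (False, False, True), (False, True, False), (False, False, False),
    (True,  True, False), (True,  False, True), (True,  True, False), (True,  False, True),
]

def congruency_effect(trials, aware):
    """Error rate on incongruent minus congruent trials for one awareness level."""
    errs = defaultdict(lambda: [0, 0])                  # congruent -> [n_errors, n_trials]
    for a, congruent, error in trials:
        if a == aware:
            errs[congruent][0] += int(error)
            errs[congruent][1] += 1
    rate = {c: n_err / n for c, (n_err, n) in errs.items()}
    return rate[False] - rate[True]

print(f"aware trials:   congruency effect = {congruency_effect(trials, aware=True):.2f}")
print(f"unaware trials: congruency effect = {congruency_effect(trials, aware=False):.2f}")
```

A positive congruency effect on unaware trials is the pattern taken as evidence for implicit representation of the change.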
IEEE Transactions on Visualization and Computer Graphics | 2006
Min Chen; R. R. Hashim; Ralf P. Botchen; Daniel Weiskopf; Thomas Ertl; Ian M. Thornton
Video visualization is a computational process that extracts meaningful information from original video data sets and conveys the extracted information to users in appropriate visual representations. This paper presents a broad treatment of the subject, following a typical research pipeline involving concept formulation, system development, a path-finding user study, and a field trial with real application data. In particular, we have conducted a fundamental study on the visualization of motion events in videos. We have, for the first time, deployed flow visualization techniques in video visualization. We have compared the effectiveness of different abstract visual representations of videos. We have conducted a user study to examine whether users are able to learn to recognize visual signatures of motions, and to assist in the evaluation of different visualization techniques. We have applied our understanding and the developed techniques to a set of application video clips. Our study has demonstrated that video visualization is both technically feasible and cost-effective. It has provided the first set of evidence confirming that ordinary users can become accustomed to the visual features depicted in video visualizations, and can learn to recognize visual signatures of a variety of motion events.
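As a generic illustration of extracting a motion field from raw video (not the paper's GPU-based flow visualization pipeline), the sketch below computes dense optical flow between successive frames with OpenCV's Farnebäck algorithm and summarizes each frame as a mean flow magnitude, a crude per-frame "motion signature". The video file name is a placeholder, not data from the paper.

```python
import cv2
import numpy as np

def motion_signature(video_path):
    """Per-frame mean optical-flow magnitude: a crude 1-D motion signature.

    Uses Farneback dense optical flow between successive grayscale frames.
    """
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"could not read {video_path}")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    signature = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)       # per-pixel flow speed
        signature.append(float(magnitude.mean()))
        prev = gray
    cap.release()
    return signature

if __name__ == "__main__":
    # "surveillance_clip.avi" is a placeholder path.
    sig = motion_signature("surveillance_clip.avi")
    print(f"{len(sig)} frames, peak motion at frame {int(np.argmax(sig))}")
```

Plotting such a signature over time gives a simple abstract representation of when motion events occur, in the spirit of the abstract visual representations compared in the paper.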