Publication


Featured research published by Tim J. Smith.


Cognitive Computation | 2011

Clustering of Gaze During Dynamic Scene Viewing is Predicted by Motion

Parag K. Mital; Tim J. Smith; Robin L. Hill; John M. Henderson

Where does one attend when viewing dynamic scenes? Research into the factors influencing gaze location during static scene viewing has reported that low-level visual features contribute very little to gaze location, especially when opposed by high-level factors such as viewing task. However, the inclusion of transient features such as motion in dynamic scenes may result in a greater influence of visual features on gaze allocation and coordination of gaze across viewers. In the present study, we investigated the contribution of low- to mid-level visual features to gaze location during free viewing of a large dataset of videos ranging in content and length. Signal detection analysis of visual features and Gaussian Mixture Models for clustering gaze were used to identify the contribution of visual features to gaze location. The results show that mid-level visual features, including corners and orientations, can distinguish between actual gaze locations and a randomly sampled baseline. However, temporal features such as flicker, motion, and their respective contrasts were the most predictive of gaze location. Additionally, moments in which all viewers' gaze tightly clustered in the same location could be predicted by motion. Motion and mid-level visual features may influence gaze allocation in dynamic scenes, but it is currently unclear whether this influence is involuntary or due to correlations with higher-order factors such as scene semantics.
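
The clustering step lends itself to a short illustration. The sketch below fits a Gaussian Mixture Model to simulated gaze positions for a single video frame using scikit-learn; the gaze coordinates, the cluster layout, and the BIC-based model selection are illustrative assumptions rather than the paper's exact pipeline.

```python
# Sketch: cluster gaze points from many viewers on a single video frame with
# a Gaussian Mixture Model, as one might do to quantify attentional
# synchrony. Illustrative only; not the paper's exact pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical (x, y) gaze positions of 40 viewers on one frame, in pixels.
gaze = np.vstack([
    rng.normal([640, 360], 40, size=(30, 2)),  # most viewers on one object
    rng.normal([200, 500], 80, size=(10, 2)),  # a few viewers elsewhere
])

# Choose the number of clusters by BIC, a common model-selection heuristic.
models = [GaussianMixture(n_components=k, random_state=0).fit(gaze)
          for k in range(1, 5)]
best = min(models, key=lambda m: m.bic(gaze))

print(f"clusters: {best.n_components}")
print("cluster means (px):", best.means_.round(1))
# One dominant, low-variance component would mark a moment when all viewers'
# gaze converges on the same location.
```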


Psychological Review | 2010

CRISP: A computational model of fixation durations in scene viewing.

Antje Nuthmann; Tim J. Smith; Ralf Engbert; John M. Henderson

Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations. Here, we propose a computational model (CRISP) that accounts for saccade timing and programming and thus for variations in fixation durations in scene viewing. First, timing signals are modeled as continuous-time random walks. Second, difficulties at the level of visual and cognitive processing can inhibit and thus modulate saccade timing. Inhibition generates moment-by-moment changes in the random walk's transition rate and processing-related saccade cancellation. Third, saccade programming is completed in 2 stages: an initial, labile stage that is subject to cancellation and a subsequent, nonlabile stage. Several simulation studies tested the model's adequacy and generality. An initial simulation study explored the role of cognitive factors in scene viewing by examining how fixation durations differed under different viewing task instructions. Additional simulations investigated the degree to which fixation durations were under direct moment-to-moment control of the current visual scene. The present work further supports the conclusion that fixation durations, to a certain degree, reflect perceptual and cognitive activity in scene viewing. Computational model simulations contribute to an understanding of the underlying processes of gaze control.
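
The model's core mechanism can be sketched in a few lines. The toy simulation below assumes a discrete-time approximation of the random-walk timer and illustrative parameter values, and it omits the cancellation logic; the actual CRISP model is continuous-time and was fitted to empirical data.

```python
# Sketch of the CRISP idea: a random-walk timer triggers saccade programming,
# which has a cancellable (labile) stage and a non-cancellable (nonlabile)
# stage. Parameters and time step are illustrative, not the fitted model.
import random

DT = 1.0            # simulation time step (ms)
N_STEPS = 14        # walk steps needed to reach threshold
RATE = 0.1          # baseline transition probability per ms
LABILE = 80.0       # labile programming stage duration (ms)
NONLABILE = 40.0    # nonlabile stage duration (ms)

def simulate_fixation(inhibition=1.0, seed=None):
    """Return one fixation duration (ms). inhibition < 1 slows the timer,
    modeling processing difficulty that delays saccade initiation."""
    rng = random.Random(seed)
    t, steps = 0.0, 0
    while True:
        t += DT
        if rng.random() < RATE * inhibition:
            steps += 1
        if steps >= N_STEPS:  # timer fires: start saccade programming
            # (cancellation omitted: in CRISP, a timer finishing during the
            # labile stage cancels and restarts the program)
            return t + LABILE + NONLABILE

durations = [simulate_fixation(inhibition=0.7, seed=i) for i in range(1000)]
print(f"mean fixation duration: {sum(durations) / len(durations):.0f} ms")
```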


Psychological Science | 2009

Eye Movements and Visual Encoding During Scene Perception

Keith Rayner; Tim J. Smith; George L. Malcolm; John M. Henderson

The amount of time viewers could process a scene during eye fixations was varied by a mask that appeared at a certain point in each eye fixation. The scene did not reappear until the viewer made an eye movement. The main finding in the studies was that to process a scene normally, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need to view the words in a text for only 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways.
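
The timing logic of the masking paradigm can be sketched as follows, assuming a simulated event stream in place of a real eye tracker; the 150-ms exposure mirrors the critical value reported above, while the event timings are invented.

```python
# Sketch of the gaze-contingent timing described above: the scene is masked a
# fixed interval after each fixation begins and restored only when the next
# saccade starts. The event stream below is simulated, not real eye-tracker
# output.
SCENE_EXPOSURE_MS = 150  # scene stays visible this long within each fixation

# Simulated (event, time_ms) pairs from a hypothetical eye tracker.
events = [("fixation_start", 0), ("saccade_start", 300),
          ("fixation_start", 330), ("saccade_start", 580)]

for event, t in events:
    if event == "fixation_start":
        # mask the scene SCENE_EXPOSURE_MS after fixation onset
        print(f"{t + SCENE_EXPOSURE_MS:4d} ms: show mask")
    elif event == "saccade_start":
        print(f"{t:4d} ms: restore scene")  # scene reappears on eye movement
```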


Journal of Vision | 2009

The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements

John M. Henderson; Myriam Chanceaux; Tim J. Smith

We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.
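
Of the three clutter measures named above, edge density is the simplest to illustrate. The sketch below approximates it as the proportion of pixels whose luminance gradient magnitude exceeds a threshold; the threshold and toy images are assumptions, and the paper's feature congestion and sub-band entropy measures are considerably more elaborate.

```python
# Sketch of one clutter measure: edge density, approximated as the fraction
# of pixels with a gradient magnitude above a threshold. A simple numpy
# approximation, not the paper's exact implementation.
import numpy as np

def edge_density(gray: np.ndarray, threshold: float = 0.1) -> float:
    """gray: 2-D array of luminance values in [0, 1]."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return float((magnitude > threshold).mean())

# Toy example: a scene with high-contrast squares is more "cluttered" than a
# uniform field.
rng = np.random.default_rng(1)
plain = np.full((100, 100), 0.5)
busy = plain.copy()
for _ in range(20):
    r, c = rng.integers(0, 90, size=2)
    busy[r:r + 10, c:c + 10] = rng.random()

print(f"plain scene edge density: {edge_density(plain):.3f}")
print(f"busy scene edge density:  {edge_density(busy):.3f}")
```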


Information Sciences | 2014

ECHOES: An intelligent serious game for fostering social communication in children with autism

Sara Bernardini; Kaśka Porayska-Pomsta; Tim J. Smith

This paper presents ECHOES, a serious game built to help young children with autism spectrum conditions practice social communication skills. We focus on the design and implementation of the interactive learning activities, which take place in a two-dimensional sensory garden, and the autonomous virtual agent, which acts as a credible social partner to children with autism. Both the activities and the agent are based on principles of best autism practice and input from users. Specification guidelines are given for building an autonomous socially competent agent that supports learning in this context. We present experimental results pertaining to the effectiveness of the agent based on an extensive evaluation of the ECHOES platform, which show encouraging tendencies for a number of children.


Personal and Ubiquitous Computing | 2012

Developing technology for autism: an interdisciplinary approach

Kaśka Porayska-Pomsta; Christopher Frauenberger; Helen Pain; Gnanathusharan Rajendran; Tim J. Smith; Rachel Menzies; Mary Ellen Foster; Alyssa Alcorn; Sam Wass; Sara Bernardini; Katerina Avramides; Wendy Keay-Bright; Jingying Chen; Annalu Waller; Karen Guldberg; Judith Good; Oliver Lemon

We present an interdisciplinary methodology for designing interactive multi-modal technology for young children with autism spectrum disorders (ASDs). In line with many other researchers in the field, we believe that the key to developing technology in this context is to embrace perspectives from diverse disciplines to arrive at a methodology that delivers satisfactory outcomes for all stakeholders. The ECHOES project provided us with the opportunity to develop a technology-enhanced learning (TEL) environment that facilitates acquisition and exploration of social skills by typically developing (TD) children and children with autism spectrum disorders (ASDs). ECHOES’ methodology and the learning environment rely crucially on multi-disciplinary expertise including developmental psychology, visual arts, human–computer interaction, artificial intelligence, education, and several other cognate disciplines. In this article, we reflect on the methods needed to develop a TEL environment for young users with ASDs by identifying key features, benefits, and challenges of this approach.


Current Directions in Psychological Science | 2012

A Window on Reality: Perceiving Edited Moving Images

Tim J. Smith; Daniel T. Levin; James E. Cutting

Edited moving images entertain, inform, and coerce us throughout our daily lives, yet until recently, the way people perceive movies has received little psychological attention. We review the history of empirical investigations into movie perception and the recent explosion of new research on the subject using methods such as behavioral experiments, functional magnetic resonance imaging (fMRI), eye tracking, and statistical corpus analysis. The Hollywood style of moviemaking, which permeates a wide range of visual media, has evolved formal conventions that are compatible with the natural dynamics of attention and humans’ assumptions about continuity of space, time, and action. Identifying how people overcome the sensory differences between movies and reality provides an insight into how the same cognitive processes are used to perceive continuity in the real world.


Visual Cognition | 2009

How are eye fixation durations controlled during scene viewing? Further evidence from a scene onset delay paradigm

John M. Henderson; Tim J. Smith

Recent research on eye movements during scene viewing has focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. In two scene memorization experiments and one visual search experiment, the scene was removed from view during critical fixations for a predetermined delay, and then restored following the delay. Experiment 1 compared filled (pattern mask) and unfilled (grey field) delays. Experiment 2 compared random to blocked delays. Experiment 3 extended the results to a visual search task. The results demonstrate that fixation durations in scene viewing comprise two fixation populations. One population remains relatively constant across delay, and the second population increases with scene onset delay. The results are consistent with a mixed eye movement control model that incorporates an autonomous control mechanism with process monitoring. The results suggest that a complete gaze control model will have to account for both fixation location and fixation duration.
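
The two-population account can be made concrete with a toy simulation. In the sketch below, the mixture proportion, the gamma-distributed baseline durations, and the assumption that delay-sensitive fixations end about 100 ms after the scene is restored are all illustrative, not fitted values from the paper.

```python
# Sketch of the two-population account: a fraction of fixations end on an
# autonomous schedule regardless of the delay, while the rest wait for the
# scene to reappear. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
P_AUTONOMOUS = 0.4   # fraction of fixations unaffected by the delay
BASE_MEAN_MS = 250   # typical fixation duration without interruption

def simulate_durations(delay_ms: float, n: int = 10_000) -> np.ndarray:
    base = rng.gamma(shape=8, scale=BASE_MEAN_MS / 8, size=n)
    autonomous = rng.random(n) < P_AUTONOMOUS
    # Delay-sensitive fixations cannot end until shortly after scene return.
    return np.where(autonomous, base, np.maximum(base, delay_ms + 100))

for delay in (0, 200, 400, 600):
    d = simulate_durations(delay)
    print(f"delay {delay:3d} ms -> mean fixation {d.mean():.0f} ms")
```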


Visual Cognition | 2009

Facilitation of return during scene viewing

Tim J. Smith; John M. Henderson

Inhibition of Return (IOR) is a delay in initiating attentional shifts to previously attended locations. It is believed to facilitate attentional exploration of a scene. Computational models of attention have implemented IOR as a simple mechanism for driving attention through a scene. However, evidence for IOR during scene viewing is inconclusive. In this study, we measured IOR during scene memorization and in response to sudden onsets at the last (1-back) and penultimate (2-back) fixation locations. The results indicate that there is a tendency for saccades to continue the trajectory of the last saccade (Saccadic Momentum), but contrary to the “foraging facilitator” hypothesis of IOR, there is also a distinct population of saccades directed back to the last fixation location, especially in response to onsets. Voluntary return saccades to the 1-back location experience temporal delay, but this does not affect their likelihood of occurrence. No localized temporal delay is exhibited at 2-back. These results suggest that IOR exists at the last fixation location during scene memorization but that this temporal delay is overridden by Facilitation of Return. Computational models of attention will fail to capture the pattern of saccadic eye movements during scene viewing unless they model the dynamics of visual encoding and can account for the interaction between Facilitation of Return, Saccadic Momentum, and Inhibition of Return.
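
One simple way to operationalize the contrast between Saccadic Momentum and return saccades is the angle between successive saccade vectors, as in the sketch below; the 30-degree tolerance and the fixation path are illustrative assumptions, not the paper's criteria.

```python
# Sketch: classify each saccade by the angle between it and the previous
# saccade. Angles near 0 deg continue the trajectory (momentum); angles near
# 180 deg head back toward the 1-back location (return).
import numpy as np

def classify_saccades(fixations: np.ndarray, tol_deg: float = 30.0):
    """fixations: (n, 2) array of successive fixation positions (px)."""
    vecs = np.diff(fixations, axis=0)  # saccade vectors
    labels = []
    for prev, cur in zip(vecs[:-1], vecs[1:]):
        cos = np.dot(prev, cur) / (np.linalg.norm(prev) * np.linalg.norm(cur))
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle < tol_deg:
            labels.append("momentum")
        elif angle > 180 - tol_deg:
            labels.append("return")
        else:
            labels.append("other")
    return labels

path = np.array([[0, 0], [100, 0], [200, 10], [105, 5], [5, 0]])
print(classify_saccades(path))  # ['momentum', 'return', 'momentum']
```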


Behavior Research Methods | 2013

Parsing eye-tracking data of variable quality to provide accurate fixation duration estimates in infants and adults

Sam Wass; Tim J. Smith; Mark H. Johnson

Researchers studying infants’ spontaneous allocation of attention have traditionally relied on hand-coding infants’ direction of gaze from videos; these techniques have low temporal and spatial resolution and are labor intensive. Eye-tracking technology potentially allows for much more precise measurement of how attention is allocated at the subsecond scale, but a number of technical and methodological issues have given rise to caution about the quality and reliability of high temporal resolution data obtained from infants. We present analyses suggesting that when standard dispersal-based fixation detection algorithms are used to parse eye-tracking data obtained from infants, the results appear to be heavily influenced by interindividual variations in data quality. We discuss the causes of these artifacts, including fragmentary fixations arising from flickery or unreliable contact with the eyetracker and variable degrees of imprecision in reported position of gaze. We also present new algorithms designed to cope with these problems by including a number of new post hoc verification checks to identify and eliminate fixations that may be artifactual. We assess the results of our algorithms by testing their reliability using a variety of methods and on several data sets. We contend that, with appropriate data analysis methods, fixation duration can be a reliable and stable measure in infants. We conclude by discussing ways in which studying fixation durations during unconstrained orienting may offer insights into the relationship between attention and learning in naturalistic settings.
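
A minimal dispersion-based parser with one post hoc verification check of the kind motivated above might look like the sketch below; the dispersion and duration thresholds are illustrative, and the algorithms described in the paper include many more checks.

```python
# Sketch of a dispersion-based fixation parser (the standard I-DT idea) with
# one post hoc check: discard implausibly short fixations, which often arise
# as fragments in noisy infant data. Thresholds are illustrative.
import numpy as np

def detect_fixations(x, y, t, max_dispersion=30.0, min_duration=100.0):
    """x, y in px; t in ms. Returns a list of (start_ms, end_ms) fixations."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # Grow the window while dispersion (x-range + y-range) stays small.
        while j + 1 < n and (np.ptp(x[i:j + 2]) + np.ptp(y[i:j + 2])
                             <= max_dispersion):
            j += 1
        if t[j] - t[i] >= min_duration:  # post hoc check: drop fragments
            fixations.append((t[i], t[j]))
            i = j + 1
        else:
            i += 1
    return fixations

# Toy data: 500 Hz samples, one stable fixation, then a jump to a new spot.
t = np.arange(0, 600, 2.0)
x = np.where(t < 300, 400 + np.random.default_rng(3).normal(0, 3, t.size), 700)
y = np.full(t.size, 300.0)
print(detect_fixations(x, y, t))  # two fixations, roughly 0-298 and 300-598
```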

Collaboration


Dive into Tim J. Smith's collaboration.

Top Co-Authors

Rachel Wu, University of Rochester
Helen Pain, University of Edinburgh
Sam Wass, University of Cambridge