
Publications


Featured research published by Matthew Tong.


Journal of Vision | 2017

Control of gaze while walking: Task structure, reward, and uncertainty.

Matthew Tong; Oran Zohar; Mary Hayhoe

While it is universally acknowledged that both bottom-up and top-down factors contribute to allocation of gaze, we currently have limited understanding of how top-down factors determine gaze choices in the context of ongoing natural behavior. One purely top-down model by Sprague, Ballard, and Robinson (2007) suggests that natural behaviors can be understood in terms of simple component behaviors, or modules, that are executed according to their reward value, with gaze targets chosen in order to reduce uncertainty about the particular world state needed to execute those behaviors. We explore the plausibility of the central claims of this approach in the context of a task where subjects walk through a virtual environment performing interceptions, avoidance, and path following. Many aspects of both walking direction choices and gaze allocation are consistent with this approach. Subjects use gaze to reduce uncertainty for task-relevant information that is used to inform action choices. Notably, the addition of motion to peripheral objects did not affect fixations when the objects were irrelevant to the task, suggesting that stimulus saliency was not a major factor in gaze allocation. The modular approach of independent component behaviors is consistent with the main aspects of performance, but there were a number of deviations suggesting that modules interact. Thus the model forms a useful, but incomplete, starting point for understanding top-down factors in active behavior.
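
To make the reward/uncertainty scheme above concrete, here is a minimal Python sketch written for illustration, not taken from Sprague, Ballard, and Robinson (2007): each module has a subjective reward and a state estimate whose uncertainty grows while it is not fixated, and gaze goes to the module for which acting on an uncertain estimate is most costly. The class and parameter names, the linear growth of uncertainty, and the assumption that a single fixation fully resets uncertainty are all simplifying assumptions.

class Module:
    """One component behavior (e.g. obstacle avoidance) competing for gaze. Illustrative only."""
    def __init__(self, name, reward, noise_growth):
        self.name = name                  # sub-task label
        self.reward = reward              # subjective value of the sub-task
        self.noise_growth = noise_growth  # how fast state uncertainty accumulates
        self.sigma = 0.0                  # current uncertainty about the tracked state

    def propagate(self):
        # Uncertainty grows while the module's state is estimated from memory.
        self.sigma += self.noise_growth

    def expected_loss(self):
        # Expected cost of acting on an uncertain estimate, scaled by reward.
        return self.reward * self.sigma


def choose_gaze(modules):
    # One gaze decision: update all uncertainties, then fixate the costliest module.
    for m in modules:
        m.propagate()
    target = max(modules, key=lambda m: m.expected_loss())
    target.sigma = 0.0                    # assume a fixation resolves that module's uncertainty
    return target.name


modules = [Module("avoid", 2.0, 0.3),
           Module("intercept", 1.0, 0.2),
           Module("follow_path", 0.5, 0.1)]
print([choose_gaze(modules) for _ in range(10)])

Running this toy version shows the high-reward, fast-growing sub-task capturing most fixations, with lower-priority sub-tasks visited only once their uncertainty has accumulated.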


Journal of Vision | 2016

Memory and visual search in naturalistic 2D and 3D environments

Chia Ling Li; M. Pilar Aivar; Dmitry Kit; Matthew Tong; Mary Hayhoe

The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.


Journal of Vision | 2015

Modeling Task Control of Gaze.

Matthew Tong; Shun Zhang; Leif Johnson; Dana H. Ballard; Mary Hayhoe

Natural behavior involves sequences of gaze changes that serve behavioral goals. A body of evidence suggests that eye-movement targeting is controlled by a priority map that is influenced by the stimulus and a variety of top-down factors, including subjective value. However, it is not known how such maps evolve over time to guide attention and gaze from one target to the next. We take the approach of decomposing behavior into a sequence of sub-tasks, where gaze is allocated to gather specific information for a sub-task, such as the location of an obstacle to be avoided. We examined behavior in a virtual environment where subjects walk along a path, collect targets, and avoid obstacles. We manipulated the relative importance of the tasks using different instructions, and manipulated uncertainty about object location by adding random motion to the objects (Tong & Hayhoe, 2013). We adapted a soft barrier model previously developed by Johnson et al. (2014). This model is similar to a random walk, with two parameters that reflect the rate of growth of uncertainty and the priority of a particular sub-task. Different sub-tasks compete for gaze, and a location is likely to be chosen as a gaze target if it is important and its location is very uncertain. We used estimates of the priority values that were consistent with subjective values of different sub-tasks recovered from walking behavior using Inverse Reinforcement Learning, and estimated the growth of uncertainty over time. We were able to predict the proportion of time spent on the path, obstacles, and targets in the environment, as well as the effect of added uncertainty about object location. This supports the claim that, in natural behavior, the next target for gaze is determined by both the subjective value of the behavior and by its information needs. Meeting abstract presented at VSS 2015.
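
As a rough illustration of the two-parameter competition described above, the sketch below simulates gaze allocation among sub-tasks whose state uncertainty grows at a sub-task-specific rate and whose priority weights that uncertainty; a softmax rule picks the next fixation and the chosen sub-task's uncertainty is reset. The softmax choice rule, the parameter values, and the function names are assumptions for illustration, not the published soft barrier model.

import math
import random

def simulate_gaze(subtasks, steps=1000, temperature=1.0, seed=0):
    """subtasks maps a name to (priority, uncertainty growth rate). Illustrative only."""
    rng = random.Random(seed)
    sigma = {name: 0.0 for name in subtasks}        # current uncertainty per sub-task
    counts = {name: 0 for name in subtasks}
    for _ in range(steps):
        # Uncertainty grows at each sub-task's own rate while it is unattended.
        for name, (_, growth) in subtasks.items():
            sigma[name] += growth
        # A sub-task is more likely to win gaze if it is important and uncertain.
        scores = {n: subtasks[n][0] * sigma[n] for n in subtasks}
        weights = [math.exp(s / temperature) for s in scores.values()]
        chosen = rng.choices(list(scores), weights=weights, k=1)[0]
        sigma[chosen] = 0.0                          # a fixation resets that sub-task's uncertainty
        counts[chosen] += 1
    return {n: c / steps for n, c in counts.items()}

# Arbitrary (priority, growth) values for three sub-tasks in a walking task.
print(simulate_gaze({"path": (0.5, 0.05),
                     "obstacles": (1.0, 0.10),
                     "targets": (0.8, 0.15)}))

The returned fixation proportions shift as the priority or growth parameters change, which is the qualitative behavior used above to account for time spent on the path, obstacles, and targets.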


PLOS Computational Biology | 2018

Modeling sensory-motor decisions in natural behavior

Ruohan Zhang; Shun Zhang; Matthew Tong; Yuchen Cui; Constantin A. Rothkopf; Dana H. Ballard; Mary Hayhoe

Although a standard reinforcement learning model can capture many aspects of reward-seeking behaviors, it may not be practical for modeling human natural behaviors because of the richness of dynamic environments and limitations in cognitive resources. We propose a modular reinforcement learning model that addresses these factors. Based on this model, a modular inverse reinforcement learning algorithm is developed to estimate both the rewards and discount factors from human behavioral data, which allows predictions of human navigation behaviors in virtual reality with high accuracy across different subjects and with different tasks. Complex human navigation trajectories in novel environments can be reproduced by an artificial agent that is based on the modular model. This model provides a strategy for estimating the subjective value of actions and how they influence sensory-motor decisions in natural behavior.
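
As a hedged illustration of the modular reinforcement-learning idea summarized above (not the paper's published algorithm), the sketch below gives each sub-task its own tabular Q-values, reward weight, and discount factor, and lets the agent act on the sum of module values. The state and action representations, the tabular Q-learning update, and all hyperparameters are illustrative assumptions.

from collections import defaultdict

class QModule:
    """One sub-task's value estimate, with its own reward weight and discount. Illustrative only."""
    def __init__(self, reward_weight, gamma, alpha=0.1):
        self.w = reward_weight            # scales this module's reward signal
        self.gamma = gamma                # module-specific discount factor
        self.alpha = alpha                # learning rate
        self.q = defaultdict(float)       # tabular Q-values keyed by (state, action)

    def update(self, s, a, r, s_next, actions):
        # Standard one-step Q-learning update on the module's own reward channel.
        best_next = max(self.q[(s_next, a2)] for a2 in actions)
        target = self.w * r + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])


def act(modules, state, actions):
    # Composite decision: sum module values and take the jointly best action.
    return max(actions, key=lambda a: sum(m.q[(state, a)] for m in modules))

The inverse problem described in the abstract would fit the per-module reward weights and discount factors to observed trajectories; this sketch shows only the forward (decision-making) side of such a modular architecture.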


Journal of Vision | 2015

Memory in visual search is task-dependent in both 2D and 3D environments

Chia-Ling Li; M. Pilar Aivar; Matthew Tong; Mary Hayhoe

Previous studies have indicated an effect of memory for both context and targets on search in 2D images of naturalistic scenes. However, recent results in 3D immersive environments failed to show much effect of context (Li et al., JOV, 2014). To examine whether this reflects differences between 2D and 3D environments, we ran a 2D experiment designed to parallel our previous 3D virtual reality environment. Subjects viewed 2D snapshots taken from the two rooms in the 3D immersive environment and then searched those images for a series of targets. The number of fixations required to locate the targets improved rapidly and was similar in both 2D and 3D environments. Interestingly, most of the improvement reflects learning to choose the correct room to look for a given target. Once in the correct room, search was very rapid and objects were located within 3-5 fixations in either environment. Previous exposure (one minute) to the context did not facilitate subsequent search. This was true for both 2D and 3D. In addition, there was little or no effect of experience with the environment on subsequent search for contextual objects in the scene. Even after 24 search trials, the number of fixations required to locate contextual objects in the room was close to values found with no experience. Incidental fixations made during previous trials also do not seem to benefit search much (though a small effect is detectable). Thus, search in both 2D and 3D environments is very comparable, and the primary effect of experience on search depends on task relevance (i.e., previously searched objects are easily remembered but not otherwise). We speculate that the effects of context either require much more extensive experience, or else a pre-exposure that immediately precedes the search episode. Meeting abstract presented at VSS 2015.


Journal of Vision | 2015

Spatial memory relative to the 3D environment guides body orientation in visual search.

M. Pilar Aivar; Chia-Ling Li; Dmitry Kit; Matthew Tong; Mary Hayhoe

Measurement of eye movements has revealed rapid development of memory for object locations in 3D immersive environments. To examine the nature of that representation, and to see if memory is coded with respect to the 3D coordinates of the room, head position was recorded while participants performed a visual search task in an immersive virtual reality apartment. The apartment had two rooms, connected by a corridor. Participants searched the apartment for a series of geometric target objects. Some target objects were always placed at the same location (stable objects), while others appeared at a new location in each trial (random objects). We analyzed whether body movements showed changes that reflected memory for target location. In each trial we calculated how far the participant's trajectory deviated from a straight path to the target object. Changes in head orientation from the moment the room was entered to the moment the target was reached were also computed. We found that the average deviation from the straight path was larger and more variable for random target objects (0.47 vs. 0.31 meters). Also, the point of maximum deviation from the straight path occurred earlier for random objects than for stable objects (at 42% vs. 52% of the total trajectory). On room entry, lateral head deviation from the room center was already larger for stable objects than for random objects (18° vs. 10°). Thus, for random objects participants moved to the center of the room until the target was located, while for stable objects they were more likely to follow a straight trajectory from first entry. We conclude that memory for target location is coded with respect to room coordinates and is revealed by body orientation at first entry. The visually guided component of search seems to be relatively unimportant or occurs very quickly upon entry. Meeting abstract presented at VSS 2015.


Journal of Vision | 2015

Objects in the peripheral visual field influence gaze location in natural vision

Elena Hitzel; Matthew Tong; Alexander C. Schütz; Mary Hayhoe


Archive | 2017

Multitask Human Navigation in VR with Motion Tracking

Matthew Tong; Mary Hayhoe; Oran Zohar; Ruohan Zhang; Dana H. Ballard; Shun Zhang


Journal of Vision | 2017

Visual search in large-scale spaces: Spatial memory and head movements

Chia-Ling Li; M. Aivar; Matthew Tong; Mary Hayhoe


Journal of Vision | 2016

Acquisition and persistence of location information over the time course of natural actions.

M. Pilar Aivar; Chia-Ling Li; Matthew Tong; Dmitry Kit; Mary Hayhoe

Collaboration


Dive into Matthew Tong's collaborations.

Top Co-Authors

Mary Hayhoe, University of Texas at Austin
Chia-Ling Li, University of Texas at Austin
M. Pilar Aivar, Autonomous University of Madrid
Dana H. Ballard, University of Texas at Austin
Shun Zhang, University of Michigan
Oran Zohar, University of Texas at Austin
Ruohan Zhang, University of Texas at Austin
Leif Johnson, University of Texas at Austin