Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oliver Layton is active.

Publication


Featured research published by Oliver Layton.


Journal of Neurophysiology | 2016

The temporal dynamics of heading perception in the presence of moving objects.

Oliver Layton; Brett R. Fajen

Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial, leading up to the last frame, that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.


Journal of Vision | 2016

Sources of bias in the perception of heading in the presence of moving objects: Object-based and border-based discrepancies.

Oliver Layton; Brett R. Fajen

The focus of expansion (FoE) specifies the heading direction of an observer during self-motion, and experiments show that humans can accurately perceive their heading from optic flow. However, when the environment contains an independently moving object, heading judgments may be biased. When objects approach the observer in depth, the heading bias may be due to discrepant optic flow within the contours of the object that radiates from a secondary FoE (object-based discrepancy) or to motion contrast at the borders of the object (border-based discrepancy). In Experiments 1 and 2, we manipulated the object's path angle and distance from the observer to test whether the heading bias induced by moving objects is entirely due to object-based discrepancies. The results showed consistent bias even at large path angles and when the object moved far in depth, which is difficult to reconcile with the influence of discrepant optic flow within the object. In Experiment 3, we found strong evidence that the misperception of heading can also result from a specific border-based discrepancy (pseudo FoE) that emerges from the relative motion between the object and background at the trailing edge of the object. Taken together, the results from the present study support the idea that when moving objects are present, heading perception is biased in some conditions by discrepant optic flow within the contours of the object and in other conditions by motion contrast at the border (the pseudo FoE). Center-weighted spatial pooling mechanisms in MSTd may account for both effects.


The Journal of Neuroscience | 2016

A Neural Model of MST and MT Explains Perceived Object Motion during Self-Motion.

Oliver Layton; Brett R. Fajen

When a moving object cuts in front of a moving observer at a 90° angle, the observer correctly perceives that the object is traveling along a perpendicular path, just as if viewing the moving object from a stationary vantage point. Although the observer's own (self-)motion affects the object's pattern of motion on the retina, the visual system is able to factor out the influence of self-motion and recover the world-relative motion of the object (Matsumiya and Ando, 2009). This is achieved by using information in global optic flow (Rushton and Warren, 2005; Warren and Rushton, 2009; Fajen and Matthis, 2013) and other sensory arrays (Dupin and Wexler, 2013; Fajen et al., 2013; Dokka et al., 2015) to estimate and deduct the component of the object's local retinal motion that is due to self-motion. However, this account (known as “flow parsing”) is qualitative and does not shed light on mechanisms in the visual system that recover object motion during self-motion. We present a simple computational account that makes explicit possible mechanisms in visual cortex by which self-motion signals in the medial superior temporal area interact with object motion signals in the middle temporal area to transform object motion into a world-relative reference frame. The model (1) relies on two mechanisms (MST-MT feedback and disinhibition of opponent motion signals in MT) to explain existing data, (2) clarifies how pathways for self-motion and object-motion perception interact, and (3) unifies the existing flow parsing hypothesis with established neurophysiological mechanisms. SIGNIFICANCE STATEMENT To intercept targets, we must perceive the motion of objects that move independently from us as we move through the environment. Although our self-motion substantially alters the motion of objects on the retina, compelling evidence indicates that the visual system at least partially compensates for self-motion such that object motion relative to the stationary environment can be more accurately perceived. We have developed a model that sheds light on plausible mechanisms within the visual system that transform retinal motion into a world-relative reference frame. Our model reveals how local motion signals (generated through interactions within the middle temporal area) and global motion signals (feedback from the dorsal medial superior temporal area) contribute and offers a new hypothesis about the connection between pathways for heading and object motion perception.
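A minimal sketch of the flow-parsing computation described in this abstract, assuming a pinhole-camera flow model, pure observer translation, and known depth; the function names (self_motion_flow, parse_object_motion) and the numbers are illustrative only, and the paper's model implements this transformation through MT-MSTd interactions rather than explicit subtraction.

```python
import numpy as np

def self_motion_flow(x, y, depth, T, f=1.0):
    """Retinal flow at image point (x, y) produced by pure observer
    translation T = (Tx, Ty, Tz), for a point at the given depth and a
    pinhole camera with focal length f."""
    Tx, Ty, Tz = T
    u = (x * Tz - f * Tx) / depth
    v = (y * Tz - f * Ty) / depth
    return np.array([u, v])

def parse_object_motion(retinal_motion, x, y, depth, T_est, f=1.0):
    """Flow parsing in its simplest form: subtract the component of the
    object's retinal motion attributable to estimated self-motion, leaving
    an estimate of the object's world-relative motion."""
    return retinal_motion - self_motion_flow(x, y, depth, T_est, f)

# Observer translating forward at 1 m/s; an object 4 m away at image
# position (0.2, 0.0) also moves leftward in the world, which contributes
# [-0.05, 0.0] to its retinal motion on top of the self-motion flow.
T_est = np.array([0.0, 0.0, 1.0])
self_component = self_motion_flow(0.2, 0.0, 4.0, T_est)
retinal = self_component + np.array([-0.05, 0.0])
print(parse_object_motion(retinal, 0.2, 0.0, 4.0, T_est))  # ~[-0.05, 0.0]
```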


PLOS Computational Biology | 2016

Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow

Oliver Layton; Brett R. Fajen

Human heading perception based on optic flow is not only accurate but also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning, which is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading.
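The soft winner-take-all idea can be sketched as a leaky recurrent population of heading-tuned units with self-excitation and pooled inhibition. This is a schematic illustration under assumed parameter values and Euler integration, not the V1-MT-MSTd model from the paper.

```python
import numpy as np

def soft_wta_step(r, feedforward, dt=0.05, tau=1.0, w_self=1.2, w_inh=0.8):
    """One Euler step of leaky recurrent dynamics with soft winner-take-all
    competition: self-excitation amplifies units consistent with the recent
    input history, pooled inhibition suppresses transient activations."""
    drive = np.maximum(feedforward + w_self * r - w_inh * r.sum(), 0.0)
    return r + dt / tau * (-r + drive)

# 90 heading-tuned units; the feedforward input peaks at unit 45 but is
# briefly pulled toward unit 60 (e.g., by a moving object crossing the path).
headings = np.arange(90)
r = np.zeros(90)
for t in range(400):
    peak = 60 if 150 <= t < 200 else 45
    feedforward = np.exp(-0.5 * ((headings - peak) / 5.0) ** 2)
    r = soft_wta_step(r, feedforward)
print(headings[np.argmax(r)])  # 45: the estimate recovers from the transient
```

Because the winning unit inherits support from its own recent activity, the population estimate resists transient changes in the flow field rather than tracking the instantaneous input.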


Journal of Vision | 2017

Possible role for recurrent interactions between expansion and contraction cells in MSTd during self-motion perception in dynamic environments

Oliver Layton; Brett R. Fajen

Cortical area MSTd contains cells sensitive to the radial expansion and contraction motion patterns experienced during forward and backward self-motion. We investigated the open question of whether populations of MSTd cells tuned to expansion and contraction interact through recurrent connectivity, which may play important roles in postural control and resolving heading in dynamic environments. We used a neural model of MSTd to generate predictions about the consequences of different types of interactions among MSTd expansion and contraction cells for heading signals produced in the case of self-motion in the presence of a retreating object, a stimulus that recruits both expansion and contraction MSTd cell populations. Human heading judgments from a psychophysical experiment that we conducted were consistent only with the MSTd model that contained recurrent connectivity within and between expansion and contraction cell populations. The model and human heading judgments were biased in the direction of the object motion when the object crossed the observer's future path and biased in the opposite direction when the object did not cross the path. We conclude that recurrent interactions among expansion and contraction cells in MSTd provide a plausible mechanism to support robust self-motion through dynamic environments.
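As a schematic of the connectivity pattern under discussion (recurrent excitation within, and pooled inhibition between, expansion and contraction populations), the following sketch uses made-up parameter values and inputs; it is not the MSTd model from the paper.

```python
import numpy as np

def coupled_step(r_exp, r_con, ff_exp, ff_con,
                 dt=0.05, tau=1.0, w_within=0.5, w_between=0.6):
    """One Euler step for two heading-tuned populations (expansion cells and
    contraction cells) with recurrent excitation within each population and
    pooled inhibition between populations."""
    drive_exp = np.maximum(ff_exp + w_within * r_exp - w_between * r_con.sum(), 0.0)
    drive_con = np.maximum(ff_con + w_within * r_con - w_between * r_exp.sum(), 0.0)
    new_exp = r_exp + dt / tau * (-r_exp + drive_exp)
    new_con = r_con + dt / tau * (-r_con + drive_con)
    return new_exp, new_con

# Forward self-motion drives the expansion population strongly; a retreating
# object adds a weaker drive to the contraction population. Between-population
# inhibition lets the dominant expansion signal suppress the contraction response.
units = np.arange(90)
ff_exp = np.exp(-0.5 * ((units - 45) / 5.0) ** 2)        # expansion input, FoE at 45
ff_con = 0.4 * np.exp(-0.5 * ((units - 60) / 5.0) ** 2)  # contraction input from object
r_exp, r_con = np.zeros(90), np.zeros(90)
for _ in range(400):
    r_exp, r_con = coupled_step(r_exp, r_con, ff_exp, ff_con)
print(units[np.argmax(r_exp)], round(r_con.max(), 3))    # expansion peak at 45, contraction ~0
```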


Journal of Vision | 2015

The robustness and stability of heading perception in dynamic environments

Oliver Layton; Brett R. Fajen

Several studies have revealed that human heading perception based on optic flow is biased when independently moving objects (IMOs) cross or approach the observer's future path. However, these biases are surprisingly weak (~2°) and perceived heading does not seem to shift abruptly at the moment that a moving object crosses the observer's future path. While previous studies have focused on biases, it is equally important to understand how heading perception is as robust and stable as it is. Such robustness and stability are surprising given that IMOs often occupy large portions of the visual field and occlude the focus of expansion (FoE). Why isn't heading perception, which is based on neurons tuned to radial expansion, biased by more than several degrees, and why doesn't it shift abruptly when an object crosses the future path? Indeed, our simulations of existing models (differential motion and center-weighted template models) yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning (2012) that explains how the visual system reliably estimates heading during navigation through dynamic environments. Unlike existing models, competitive dynamics between units in MSTd stabilize the model's heading estimate over time, even when an IMO crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. The model explains the surprising bias that occurs when an object disoccludes the future path, even though the FoE is visible in the flow field (Layton & Fajen, Submitted to JoV). Our findings support competitive temporal dynamics as a crucial mechanism underlying the robustness of perception of heading. Meeting abstract presented at VSS 2015.


Journal of Vision | 2016

A neural model of MST and MT explains perceived object motion during self-motion

Oliver Layton; Brett R. Fajen


Journal of Vision | 2017

A model of optic flow parsing as error in prediction

Oliver Layton; Brett R. Fajen


Journal of Vision | 2017

Slowed optic flow is used to perceive object motion during active locomotion

Howard S. Hock; Oliver Layton; Adar Pelah


Journal of Vision | 2017

Choosing actions that maintain sprint ability during repeated target interception tasks

Nathaniel Powell; Scott Steinmetz; Oliver Layton; Brett R. Fajen

Collaboration


Dive into Oliver Layton's collaborations.

Top Co-Authors

Brett R. Fajen
Rensselaer Polytechnic Institute

Nathaniel Powell
Rensselaer Polytechnic Institute

Scott Steinmetz
Rensselaer Polytechnic Institute

Howard S. Hock
Florida Atlantic University

Robert Wild
Rensselaer Polytechnic Institute