
Publication


Featured research published by Nonie Finlayson.


Journal of Vision | 2013

Segmentation by depth does not always facilitate visual search

Nonie Finlayson; Roger W. Remington; James D. Retell; Philip M. Grove

In visual search, target detection times are relatively insensitive to set size when targets and distractors differ on a single feature dimension. Search can be confined to only those elements sharing a single feature, such as color (Egeth, Virzi, & Garbart, 1984). These findings have been taken as evidence that elementary feature dimensions support a parallel segmentation of a scene into discrete sets of items. Here we explored whether relative depth (signaled by binocular disparity) could support a similar parallel segmentation by examining the effects of distributing distracting elements across two depth planes. Three important empirical findings emerged. First, when the target was a feature singleton on the target depth plane, but a conjunction search among distractors on the nontarget plane, search efficiency increased compared to when all elements occupied a single depth plane. Second, benefits of segmentation in depth were only observed when the target depth plane was known in advance. Third, no benefit of segmentation in depth was observed when both planes required a conjunction search, even with prior knowledge of the target depth plane. Overall, the benefit of distributing the elements of a search set across two depth planes was observed only when the two planes differed both in binocular disparity and in the elementary feature composition of individual elements. We conclude that segmentation of the search array into two depth planes can facilitate visual search, but unlike color or other elementary properties, does not provide an automatic, preattentive segmentation.
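
A common way to quantify the search-efficiency effects described above is the slope of the response-time-by-set-size function; shallower slopes mean more efficient search, and near-zero slopes are the classic signature of parallel "pop-out" search. A minimal sketch of that computation (the set sizes and RTs below are illustrative values, not data from the study):

```python
import numpy as np

# Hypothetical mean correct RTs (ms) at each set size for two conditions.
set_sizes = np.array([4, 8, 16])
rt_one_plane = np.array([620.0, 700.0, 860.0])   # all items on one depth plane
rt_two_planes = np.array([600.0, 625.0, 665.0])  # items split across two planes

def search_slope(set_sizes, rts):
    """Slope (ms/item) of the RT x set-size function via least squares."""
    slope, _intercept = np.polyfit(set_sizes, rts, deg=1)
    return slope

print(f"one plane:  {search_slope(set_sizes, rt_one_plane):.1f} ms/item")
print(f"two planes: {search_slope(set_sizes, rt_two_planes):.1f} ms/item")
```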


Perception | 2014

The effect of stimulus size on stereoscopic fusion limits and response criteria.

Philip M. Grove; Nonie Finlayson; Hiroshi Ono

The stereoscopic fusion limit denotes the largest binocular disparity for which a single fused image is perceived. Several criteria can be employed when judging whether or not a stereoscopic display is fused, and this may be a factor contributing to a discrepancy in the literature. Schor, Wood, and Ogawa (1984 Vision Research, 24, 661–665) reported that fusion limits did not change as a function of bar width, while Roumes, Plantier, Menu, and Thorpe (1997 Human Factors, 39, 359–373) reported higher fusion limits for larger stimuli than for smaller stimuli. Our investigation suggests that differing criteria between the studies could contribute to this discrepancy. In experiment 1 we measured horizontal and vertical disparity fusion limits for thin bars and for the edge of an extended surface, allowing observers to use the criterion of either diplopia or rivalry when evaluating fusion for all stimuli. Fusion limits were equal for thin bars and extended surfaces in both horizontal and vertical disparity conditions. We next measured fusion limits for a range of bar widths and instructed observers to indicate which criterion they employed on each trial. Fusion limits were constant across all stimulus widths. However, there was a sharp change in criterion from diplopia to rivalry when the angular extent of the bar width exceeded about twice the fusion limit, expressed in angular terms. We conclude that stereoscopic fusion limits do not depend on stimulus size in this context, but the criterion for fusion does. Therefore, the criterion for fusion should be clearly defined in any study measuring stereoscopic fusion limits.
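
Because the criterion finding hinges on expressing both bar width and fusion limit in angular terms, a small sketch of the geometry may help. The viewing distance, bar widths, and fusion limit below are made-up values, and the twice-the-fusion-limit switch point is the approximate rule reported above:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Angular subtense of a stimulus seen at a given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

fusion_limit_deg = 0.2          # hypothetical fusion limit (degrees)
viewing_distance_cm = 57.0      # at 57 cm, 1 cm subtends roughly 1 degree

for width_cm in (0.1, 0.2, 0.4, 0.8):
    width_deg = visual_angle_deg(width_cm, viewing_distance_cm)
    # Reported finding: the criterion switched from diplopia to rivalry once
    # bar width exceeded roughly twice the fusion limit, in angular terms.
    criterion = "rivalry" if width_deg > 2 * fusion_limit_deg else "diplopia"
    print(f"width {width_deg:.2f} deg -> expected criterion: {criterion}")
```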


Vision Research | 2016

Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth

Nonie Finlayson; Julie Golomb

A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information - not position-in-depth - seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location.
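
Congruency-bias effects of this kind are typically quantified by comparing feature ("same/different") judgments for object pairs that share a location against pairs that do not. A minimal simulation sketch of that comparison (all variables and the injected bias size are hypothetical, purely for illustration):

```python
import numpy as np

# Hypothetical trials: was the color pair physically the same, did the two
# objects share a 2D location, and did the observer respond "same color"?
rng = np.random.default_rng(0)
n = 400
same_color = rng.integers(0, 2, n).astype(bool)
same_location = rng.integers(0, 2, n).astype(bool)
# Simulate a congruency bias: a shared location inflates "same" responses.
p_same = 0.8 * same_color + 0.1 + 0.15 * same_location
respond_same = rng.random(n) < np.clip(p_same, 0, 1)

for loc, label in ((True, "same 2D location"), (False, "different location")):
    rate = respond_same[same_location == loc].mean()
    print(f'{label}: P(respond "same color") = {rate:.2f}')
# A higher "same" rate when locations match, regardless of the actual colors,
# is the signature of the spatial congruency bias.
```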


NeuroImage | 2017

Differential patterns of 2D location versus depth decoding along the visual hierarchy.

Nonie Finlayson; Xiaoli Zhang; Julie Golomb

Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown whether or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy.
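
A minimal sketch of the multi-voxel pattern analysis logic described here, decoding one binary location dimension from voxel patterns with a cross-validated linear classifier (the data are simulated, and plain k-fold stands in for the usual leave-one-run-out scheme):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_blocks, n_voxels = 96, 200

# Simulated block-wise voxel patterns with a weak signal for one binary
# location dimension (e.g., left/right, above/below, or front/behind fixation).
labels = rng.integers(0, 2, n_blocks)
signal = np.outer(labels, rng.normal(0, 0.5, n_voxels))
patterns = signal + rng.normal(0, 1.0, (n_blocks, n_voxels))

# Cross-validated linear decoding; above-chance accuracy indicates that the
# patterns carry information about that stimulus dimension.
acc = cross_val_score(LinearSVC(), patterns, labels, cv=8)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```

Running this separately for horizontal, vertical, and depth labels within each region of interest yields the decoding profiles that are compared along the visual hierarchy.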


bioRxiv | 2018

Population receptive field estimates for motion-defined stimuli

Anna E Hughes; John Greenwood; Nonie Finlayson; D. Samuel Schwarzkopf

The processing of motion changes throughout the visual hierarchy, from spatially restricted ‘local motion’ in early visual cortex to more complex large-field ‘global motion’ at later stages. Here we used functional magnetic resonance imaging (fMRI) to examine spatially selective responses in these areas related to the processing of random-dot stimuli defined by differences in motion. We used population receptive field (pRF) analyses to map retinotopic cortex using bar stimuli comprising coherently moving dots. In the first experiment, we used three separate background conditions: no background dots (dot-defined bar-only), dots moving coherently in the opposite direction to the bar (kinetic boundary) and dots moving incoherently in random directions (global motion). Clear retinotopic maps were obtained for the bar-only and kinetic-boundary conditions across visual areas V1-V3 and in higher dorsal areas. For the global-motion condition, retinotopic maps were much weaker in early areas and became clear only in higher areas, consistent with the emergence of global-motion processing throughout the visual hierarchy. However, in a second experiment we demonstrate that this pattern is not specific to motion-defined stimuli, with very similar results for a transparent-motion stimulus and a bar defined by a static low-level property (dot size) that should have driven responses particularly in V1. We further exclude explanations based on stimulus visibility by demonstrating that the observed differences in pRF properties do not follow the ability of observers to localise or attend to these bar elements. Rather, our findings indicate that dorsal extrastriate retinotopic maps may primarily be determined by the visibility of the neural responses to the bar relative to the background response (i.e. neural signal-to-noise ratios) and suggest that claims about stimulus selectivity from pRF experiments must be interpreted with caution.
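
Population receptive field analysis models each voxel's response as the overlap between a 2D Gaussian and the stimulus aperture over time. A stripped-down grid-fit sketch under simplifying assumptions (simulated apertures and time series, no HRF convolution, coarse grid search):

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, grid):
    """2D Gaussian pRF evaluated on an (N, N, 2) coordinate grid."""
    d2 = (grid[..., 0] - x0) ** 2 + (grid[..., 1] - y0) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

# Coordinate grid spanning +/- 10 degrees of visual field.
coords = np.linspace(-10, 10, 41)
grid = np.stack(np.meshgrid(coords, coords), axis=-1)

# Simulated binary stimulus apertures (standing in for a sweeping bar).
rng = np.random.default_rng(2)
apertures = rng.random((120, 41, 41)) < 0.1   # 120 TRs of sparse stimulation
flat = apertures.reshape(120, -1).astype(float)

# Simulated voxel time series generated from a "true" pRF plus noise.
true_prf = gaussian_prf(3.0, -2.0, 1.5, grid)
ts = flat @ true_prf.ravel() + rng.normal(0, 1, 120)

# Grid search: pick the candidate pRF whose predicted time series best
# correlates with the measured one.
best = max(
    ((x0, y0, s) for x0 in coords[::4] for y0 in coords[::4]
     for s in (0.5, 1.5, 3.0)),
    key=lambda p: np.corrcoef(flat @ gaussian_prf(*p, grid).ravel(), ts)[0, 1],
)
print("estimated pRF (x, y, sigma):", best)
```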


Visual Cognition | 2017

2D location biases depth-from-disparity judgments but not vice versa

Nonie Finlayson; Julie Golomb

Visual cognition in our 3D world requires understanding how we accurately localize objects in 2D and depth, and what influence both types of location information have on visual processing. Spatial location is known to play a special role in visual processing, but most of these findings have focused on 2D location. One such phenomenon is the spatial congruency bias, where 2D location biases judgments of object features but features do not bias location judgments. This paradigm has recently been used to compare different types of location information in terms of how much they bias different types of features. Here we used this paradigm to ask a related question: whether 2D and depth-from-disparity location bias localization judgments for each other. We found that presenting two objects in the same 2D location biased position-in-depth judgments, but presenting two objects at the same depth (disparity) did not bias 2D location judgments. We conclude that an object’s 2D location may be automatically incorporated into perception of its depth location, but not vice versa, which is consistent with a fundamentally special role for 2D location in visual processing.


Visual Cognition | 2015

The representation and perception of 3D space: Interactions between 2D location and depth

Nonie Finlayson; Xiaoli Zhang; Julie Golomb

We live in a 3D world, and yet the majority of vision research is restricted to 2D phenomena, with depth research typically treated as a separate field. Here we ask whether 2D spatial information and depth information interact to form neural representations of 3D space, and if so, what are the perceptual implications? Using fMRI and behavioural methods, we reveal that human visual cortex gradually transitions from 2D to 3D spatial representations, with depth information emerging later along the visual hierarchy, and demonstrate that 2D location holds a fundamentally special place in early visual processing.


Journal of Vision | 2015

Human visual cortex gradually transitions from 2D to 3D spatial representations.

Nonie Finlayson; Julie Golomb

We live in a 3D world, and yet the majority of vision research is restricted to 2D phenomena. Previous research has shown that neural representations of 2D visual space are present throughout visual cortex. Many of these visual areas are also known to be sensitive to depth information (including V3, V3A, V3B/KO, V7, LO, and MT) - how does this depth information interact with 2D spatial information? Using fMRI and multi-voxel pattern analysis, we investigated the relationship between horizontal (x), vertical (y), and depth (z) representations in the brain. Participants viewed random dot stereograms with red/green anaglyph glasses. Eight different locations were stimulated in a blocked design: each location was defined by x, y, and z location (left or right, above or below, and in front or behind the fixation cross). The patterns of activation for each of the x, y, and z location conditions were compared across the brain with a searchlight analysis and within functionally localized ROIs. As expected, both x and y location information was present all along the visual pathways, with x information outperforming y information in higher visual areas. Importantly, while only 2D location information could be decoded in early visual cortex, all three types of location information could be decoded in several higher visual cortex regions. Moreover, this pattern seemed to emerge gradually: we found opposite trends for y and z location information, with y information decreasing and z information increasing along the visual hierarchy (both dorsal and ventral streams). In addition, we found that representations of depth are dependent on x location, but tolerant of changes in y location. We conclude that what begins as purely 2D spatial information in early visual areas gradually transitions to 3D spatial representations higher along the visual hierarchy. Meeting abstract presented at VSS 2015.
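
The searchlight analysis mentioned above repeats a decoding test within a small neighbourhood around every voxel, yielding a whole-brain map of where location information resides. A schematic sketch on a toy voxel grid (simulated data; cubic neighbourhoods stand in for the usual spheres):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_samples, side = 40, 8                      # 40 blocks, 8x8x8 voxel grid
data = rng.normal(size=(n_samples, side, side, side))
labels = rng.integers(0, 2, n_samples)

radius = 1  # searchlight radius in voxels
acc_map = np.zeros((side, side, side))
for x in range(side):
    for y in range(side):
        for z in range(side):
            # Gather the cube of voxels around the centre (clipped at edges).
            sl = data[:,
                      max(x - radius, 0):x + radius + 1,
                      max(y - radius, 0):y + radius + 1,
                      max(z - radius, 0):z + radius + 1].reshape(n_samples, -1)
            acc_map[x, y, z] = cross_val_score(
                LinearSVC(), sl, labels, cv=4).mean()

print("peak searchlight accuracy:", acc_map.max())
```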


Journal of Vision | 2015

Topographic maps of depth in human visual cortex

Daniel Berman; Nonie Finlayson; Julie Golomb

Depth is a frequently overlooked aspect in vision research, despite the fact that recognizing and perceiving depth cues are essential when it comes to appropriately interacting with our surroundings. Behavioral and physiological studies have provided a solid framework for understanding depth perception, but we have yet to establish the precise neural organization of depth representation in human visual cortex. Here, we map depth representations using a phase-encoded stimulus that travels along the z-axis (depth), analogous to the standard 2D retinotopic mapping paradigm (wedges and rings; Engel et al., 1994; Sereno et al., 1995). Our stimulus was a large 2D patch filled with black and white moving dots. The stimulus moved forwards or backwards (in alternating runs) through 13 discrete depth planes, completing a full cycle every 28 seconds. Red/green anaglyph glasses were used to achieve depth perception while in the scanner. Using a standard phase-encoded cross-correlation approach, we found voxels selective for different depth planes in several intermediate and later visual areas. Preliminary results suggest that depth representations may be organized into a large-scale map-like representation across V3d, V3A, V3B, and V7. Our findings provide critical insights regarding the neural correlates of depth representations, and we present the first depth maps in human visual cortex measured with fMRI. Meeting abstract presented at VSS 2015.
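
Phase-encoded analysis of this kind assigns each voxel the phase of the 28-second stimulus cycle at which a shifted reference waveform best correlates with its time series. A compact sketch of that cross-correlation step (the TR, run length, and voxel data are all assumed or simulated, not taken from the study):

```python
import numpy as np

tr, cycle_s, n_trs = 2.0, 28.0, 168          # 12 cycles of 14 TRs each
t = np.arange(n_trs) * tr

rng = np.random.default_rng(4)
# Simulated voxel that responds when the stimulus sits at a particular depth,
# i.e., at a particular phase of the 28 s cycle.
true_phase = 2 * np.pi * 0.3
voxel = np.sin(2 * np.pi * t / cycle_s - true_phase) + rng.normal(0, 0.5, n_trs)

# Cross-correlate with phase-shifted reference waveforms; the best-fitting
# shift estimates the voxel's preferred depth plane.
phases = np.linspace(0, 2 * np.pi, 64, endpoint=False)
corrs = [np.corrcoef(np.sin(2 * np.pi * t / cycle_s - p), voxel)[0, 1]
         for p in phases]
best_phase = phases[int(np.argmax(corrs))]
print(f"estimated phase: {best_phase:.2f} rad (true: {true_phase:.2f} rad)")
```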


I-perception | 2011

The Effect of Stimulus Size on Stereoscopic Fusion Limits and Response Criteria

Philip M. Grove; Nonie Finlayson; Hiroshi Ono

Collaboration


Dive into Nonie Finlayson's collaborations.

Top Co-Authors

John Greenwood

University College London
