Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sarah J. Harrison is active.

Publications


Featured research published by Sarah J. Harrison.


Vision Research | 2010

Uninformative visual experience establishes long term perceptual bias.

Sarah J. Harrison; Benjamin T. Backus

Visual appearance depends upon the resolution of ambiguities that arise when 2D retinal images are interpreted as 3D scenes. This resolution may be characterized as a form of Bayesian perceptual inference, whereby retinal sense data combine with prior belief to yield an interpretation. Under this framework, the prior reflects environmental statistics, so an efficient system should learn by changing its prior after exposure to new statistics. We conjectured that a prior would only be modified when sense data contain disambiguating information, such that it is clear what bias is appropriate. This conjecture was tested by using a perceptually bistable stimulus, a rotating wire-frame cube, as a sensitive indicator of changes in the prior for 3D rotation direction, and by carefully matching perceptual experience of ambiguous and unambiguous versions of the stimulus across three groups of observers. We show for the first time that changes in the prior, observed as a change in bias that resists reverse learning the next day, are affected more by ambiguous stimuli than by disambiguated stimuli. Thus, contrary to our conjecture, modification of the prior occurred preferentially when the observer actively resolved ambiguity rather than when the observer was exposed to environmental contingencies. We propose that resolving stimuli that are not easily interpreted by existing visual rules must be a valid method for establishing useful perceptual biases in the natural world.
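
The Bayesian framing in this abstract can be made concrete with a small numerical sketch. The values below are hypothetical and are not taken from the study; they only illustrate how a prior over rotation direction dominates when the sense data are ambiguous and is overridden when the stimulus is disambiguated.

```python
# Illustrative sketch of the Bayesian inference described above
# (hypothetical numbers, not values from Harrison & Backus, 2010).

def posterior_clockwise(prior_cw, likelihood_cw, likelihood_ccw):
    """Posterior probability that the cube rotates clockwise, via Bayes' rule."""
    evidence = prior_cw * likelihood_cw + (1.0 - prior_cw) * likelihood_ccw
    return prior_cw * likelihood_cw / evidence

# Fully ambiguous cube: the retinal data support both directions equally,
# so the percept is determined by the prior alone.
print(posterior_clockwise(prior_cw=0.7, likelihood_cw=0.5, likelihood_ccw=0.5))  # 0.70

# Disambiguated cube: depth cues strongly favor clockwise, so the data
# override a prior that leans the other way.
print(posterior_clockwise(prior_cw=0.3, likelihood_cw=0.9, likelihood_ccw=0.1))  # ~0.79
```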


Vision Research | 2008

Within-texture collinearity improves human texture segmentation

Sarah J. Harrison; D.R.T. Keeble

Spatial arrangement has been shown to facilitate both detection of a threshold target by collinear flankers and detection of smooth chains within random arrays of suprathreshold elements. Here, we investigate the effect of alignment between texture elements on orientation-based texture segmentation. Textures composed of Gabor elements were used in a figure-discrimination task. The degree of collinearity within the texture was manipulated, and threshold figure-ground orientation differences were found. A facilitative effect of collinearity on segmentation was seen, which was insensitive to Gabor carrier phase at the texture-element co-axial spacing of 3λ used here. The pattern of results with respect to collinearity could not be attributed simply to improved linkage of local orientation contrast at figure borders in isolation, and instead suggests a role for the figure interior in texture segmentation.
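
For readers unfamiliar with the stimuli, the sketch below generates a generic Gabor element (a sinusoidal carrier under a Gaussian envelope) of the kind used in such textures. The function name and parameter values are illustrative assumptions and do not reproduce the exact stimuli of this study.

```python
# Hedged sketch of a generic Gabor texture element (sinusoidal carrier under a
# Gaussian envelope). Parameter values are illustrative, not those of the study.
import numpy as np

def gabor_patch(size=64, wavelength=8.0, orientation_deg=45.0, phase=0.0, sigma=8.0):
    """Return a size x size array containing a Gabor element."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    # Rotate the coordinate frame so the carrier grating has the requested orientation.
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength + phase)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return carrier * envelope

# A co-axial spacing of 3 * wavelength (the 3λ spacing mentioned in the abstract)
# would place neighbouring collinear elements 24 pixels apart for wavelength = 8.
patch = gabor_patch()
print(patch.shape)  # (64, 64)
```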


Journal of Neurophysiology | 2015

Spatial specificity and inheritance of adaptation in human visual cortex

Jonas Larsson; Sarah J. Harrison

Adaptation at early stages of sensory processing can be propagated to downstream areas. Such inherited adaptation is a potential confound for functional magnetic resonance imaging (fMRI) techniques that use selectivity of adaptation to infer neuronal selectivity. However, the relative contributions of inherited and intrinsic adaptation at higher cortical stages, and the impact of inherited adaptation on downstream processing, remain unclear. Using fMRI, we investigated how adaptation to visual motion direction and orientation influences visually evoked responses in human V1 and extrastriate visual areas. To dissociate inherited from intrinsic adaptation, we quantified the spatial specificity of adaptation for each visual area as a measure of the receptive field sizes of the area where adaptation originated, predicting that adaptation originating in V1 should be more spatially specific than adaptation intrinsic to extrastriate visual cortex. In most extrastriate visual areas, the spatial specificity of adaptation did not differ from that in V1, suggesting that adaptation originated in V1. Only in one extrastriate area—MT—was the spatial specificity of direction-selective adaptation significantly broader than in V1, consistent with a combination of inherited V1 adaptation and intrinsic MT adaptation. Moreover, inherited adaptation effects could be both facilitatory and suppressive. These results suggest that adaptation at early visual processing stages can have widespread and profound effects on responses in extrastriate visual areas, placing important constraints on the use of fMRI adaptation techniques, while also demonstrating a general experimental strategy for systematically dissociating inherited from intrinsic adaptation by fMRI.


Journal of Vision | 2012

Associative learning of shape as a cue to appearance: A new demonstration of cue recruitment

Sarah J. Harrison; Benjamin T. Backus

The perceived rotation direction of a wire-frame Necker cube at stimulus onset can be conditioned to be dependent on retinal location (B. T. Backus & Q. Haijiang, 2007; S. J. Harrison & B. T. Backus, 2010a). This phenomenon was proposed to be an example of the visual system learning new cues to visual appearance, by adaptation in response to new experiences. Here, we demonstrate recruitment of a new cue, object shape, for the appearance of rotating 3D objects. The cue was established by interleaving ambiguous and disambiguated instances of two shapes, cubes and spheres, at the same retinal location. Disambiguated cubes and spheres rotated in opposite directions. A significant bias was consequently introduced in the resolution of ambiguity, whereby the proportions of ambiguous shapes perceived as rotating clockwise differed, in the direction predicted by their disambiguated counterparts. This finding suggests that training led the visual system to distinguish between the two shapes. The association of rotation direction and shape was only achieved when monocular depth cues were used to depict rotation in depth; shapes disambiguated by binocular disparity did not lead to recruitment of the shape cue. We speculate that this difference may be the consequence of a difference in the neural pathways by which the disambiguating cues act. This new instance of the cue recruitment effect opens possibilities for further generalization of the phenomenon.


Vision Research | 2009

Perceptual comparison of features within and between objects: a new look

Sarah J. Harrison; Jacob Feldman

The integration of spatially distinct elements into coherent objects is a fundamental process of vision. Yet notwithstanding an extensive literature on perceptual grouping, we still lack a clear understanding of the representational consequences of grouping disparate visual locations. We investigated this question in a feature comparison task; subjects identified matching features that belonged either to the same apparent object (within-object condition) or to different apparent objects (between-object condition). The stimulus was backward-masked at a variable SOA, to examine the consequences of changes in the perceptual organization of the segments over time. Critical to our aims, the two objects composing our stimulus were occluded to a variable extent, so that differences in within-object and between-object performance could be unequivocally related to the formation of objects. For certain stimulus arrangements, we found superior performance for within-object matches. The pattern of performance was, however, highly dependent on the stimulus orientation and was not related to the strength of the object percept. Using an oblique stimulus arrangement, we observed superior between-object comparisons that did vary with the object percept. We conclude that performance in our feature comparison task is strongly influenced by spatial relations between features that are independent of object properties. Indeed, this dominating effect may hide an underlying mechanism whereby formation of a visual object suppresses comparison of distinct features within the object.


Vision Research | 2011

Disambiguation of Necker cube rotation by monocular and binocular depth cues: Relative effectiveness for establishing long-term bias

Sarah J. Harrison; Benjamin T. Backus; Anshul Jain

The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same as or different from that used on Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli.


Vision Research | 2014

A trained perceptual bias that lasts for weeks

Sarah J. Harrison; Benjamin T. Backus

Classical (Pavlovian) conditioning procedures can be used to bias the appearance of physical stimuli. Under natural conditions this form of perceptual learning could cause perception to become more accurate by changing prior belief to be in accord with what is statistically likely. However, for learning to be of functional significance, it must last until similar stimuli are encountered again. Here, we used the apparent rotation direction of a revolving wire frame (Necker) cube to test whether a learned perceptual bias is long lasting. Apparent rotation direction was trained to have a different bias at two different retinal locations by interleaving the presentation of ambiguous cubes with presentation of cubes that were disambiguated by disparity and occlusion cues. Four groups of eight subjects were subsequently tested either 1, 7, 14, or 28 days after initial training, respectively, using a counter-conditioning procedure. All four groups showed incomplete re-learning of the reversed contingency relationship during their second session. One group repeated the counter-conditioning and showed an increase in the reverse bias, showing that the first counter-conditioning session also had a long-lasting effect. The fact that the original learning was still evident four weeks after the initial training is consistent with the operation of a mechanism that ordinarily would improve the accuracy and efficiency of perception.


Journal of Vision | 2010

Disambiguating Necker cube rotation using a location cue: what types of spatial location signal can the visual system learn?

Sarah J. Harrison; Benjamin T. Backus


Journal of Vision | 2009

The influence of shape and skeletal axis structure on texture perception.

Sarah J. Harrison; Jacob Feldman


Journal of Vision | 2010

Uninformative trials are more effective than informative trials in learning a long term perceptual bias

Sarah J. Harrison; Benjamin T. Backus

Collaboration


Dive into Sarah J. Harrison's collaborations.

Top Co-Authors

Benjamin T. Backus
State University of New York College of Optometry

Anshul Jain
State University of New York College of Optometry

Ben Backus
State University of New York College of Optometry