Publication


Featured research published by Jason Rajsic.


Attention, Perception, & Psychophysics | 2014

Asymmetrical Access to Color and Location in Visual Working Memory

Jason Rajsic; Daryl E. Wilson

Models of visual working memory (VWM) have benefitted greatly from the use of the delayed-matching paradigm. However, in this task, the ability to recall a probed feature is confounded with the ability to maintain the proper binding between the feature that is to be reported and the feature (typically location) that is used to cue a particular item for report. Given that location is typically used as a cue feature, we used the delayed-estimation paradigm to compare memory for location to memory for color, rotating which feature was used as a cue and which was reported. Our results revealed several novel findings: 1) the likelihood of reporting a probed object’s feature was superior when reporting location with a color cue than when reporting color with a location cue; 2) location report errors were composed entirely of swap errors, with little to no random location reports; and 3) both color and location reports greatly benefitted from the presence of nonprobed items at test. This last finding suggests that it is uncertainty over the bindings between locations and colors at memory retrieval, rather than at encoding, that drives swap errors. We interpret our findings as consistent with a representational architecture that nests remembered object features within remembered locations.
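The swap-error decomposition described above is the kind of result typically quantified with a probabilistic mixture model over circular report errors, in the spirit of Bays, Catalao, and Husain (2009), cited in a later entry. The sketch below is illustrative only, not the authors' analysis code: the simulated data, noise level, and mixture weights are invented for demonstration.

```python
# Illustrative sketch (not the authors' code) of a target/swap/guess mixture
# model for delayed-estimation report errors. All parameter values are
# assumptions chosen only to show the analysis structure.
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def mixture_nll(params, errors, nontarget_offsets):
    """Negative log-likelihood of a target/swap/guess mixture on circular errors."""
    p_target, p_swap, kappa = params
    p_guess = 1.0 - p_target - p_swap
    if min(p_target, p_swap, p_guess) < 0 or kappa <= 0:
        return np.inf
    like_target = vonmises.pdf(errors, kappa)                      # responses centered on the target
    like_swap = vonmises.pdf(errors[:, None] - nontarget_offsets,  # responses centered on nontargets
                             kappa).mean(axis=1)
    like_guess = 1.0 / (2.0 * np.pi)                               # uniform random guesses
    like = p_target * like_target + p_swap * like_swap + p_guess * like_guess
    return -np.sum(np.log(like))

# Simulate 300 trials: 70% target reports, 25% swaps to a nontarget, 5% guesses.
n = 300
nontargets = np.array([2.0, -2.5])   # nontarget feature values, radians relative to the target
kind = rng.choice(3, size=n, p=[0.70, 0.25, 0.05])
errors = np.where(kind == 0, vonmises.rvs(8.0, size=n, random_state=rng),
         np.where(kind == 1, rng.choice(nontargets, n) + vonmises.rvs(8.0, size=n, random_state=rng),
                  rng.uniform(-np.pi, np.pi, n)))

fit = minimize(mixture_nll, x0=[0.5, 0.3, 4.0], args=(errors, nontargets),
               method="Nelder-Mead")
print("estimated [p_target, p_swap, kappa]:", np.round(fit.x, 3))
```

In this framing, a high estimated p_swap with a low uniform-guess component corresponds to the paper's observation that location errors were swaps rather than random reports.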


Psychonomic Bulletin & Review | 2016

Object-substitution masking degrades the quality of conscious object representations

Geoffrey Harrison; Jason Rajsic; Daryl E. Wilson

Object-substitution masking (OSM) is a unique paradigm for the examination of object updating processes. However, existing models of OSM are underspecified with respect to the impact of object updating on the quality of target representations. Using two paradigms of OSM combined with a mixture model analysis, we examine the impact of post-perceptual processes on a target’s representational quality within conscious awareness. We conclude that object updating processes responsible for OSM cause degradation in the precision of object representations. These findings contribute to a growing body of research advocating for the application of mixture model analysis to the study of how cognitive processes impact the quality (i.e., precision) of object representations.


Journal of Experimental Psychology: Human Perception and Performance | 2015

Confirmation bias in visual search

Jason Rajsic; Daryl E. Wilson; Jay Pratt

In a series of experiments, we investigated the ubiquity of confirmation bias in cognition by measuring whether visual selection is prioritized for information that would confirm a proposition about a visual display. We show that attention is preferentially deployed to stimuli matching a target template, even when alternate strategies would reduce the number of searches necessary. We argue that this effect is an involuntary consequence of goal-directed processing, and show that it can be reduced when ample time is provided to prepare for search. These results support the notion that capacity-limited cognitive processes contribute to the biased selection of information that characterizes confirmation bias.


Attention, Perception, & Psychophysics | 2017

Learned value and object perception: Accelerated perception or biased decisions?

Jason Rajsic; Harendri Perera; Jay Pratt

Learned value is known to bias visual search toward valued stimuli. However, some uncertainty exists regarding the stage of visual processing that is modulated by learned value. Here, we directly tested the effect of learned value on preattentive processing using temporal order judgments. Across four experiments, we imbued some stimuli with high value and some with low value, using a nonmonetary reward task. In Experiment 1, we replicated the value-driven distraction effect, validating our nonmonetary reward task. Experiment 2 showed that high-value stimuli, but not low-value stimuli, exhibit a prior-entry effect. Experiment 3, which reversed the temporal order judgment task (i.e., reporting which stimulus came second), showed no prior-entry effect, indicating that although a response bias may be present for high-value stimuli, they are still reported as appearing earlier. However, Experiment 4, using a simultaneity judgment task, showed no shift in temporal perception. Overall, our results support the conclusion that learned value biases perceptual decisions about valued stimuli without speeding preattentive stimulus processing.
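Prior entry in a temporal order judgment (TOJ) task is conventionally quantified as a shift in the point of subjective simultaneity (PSS), the SOA at which the two stimuli are equally likely to be reported as appearing first. The sketch below is a generic illustration of that analysis, not the authors' procedure; the SOAs, response proportions, and the cumulative-Gaussian form are illustrative assumptions.

```python
# Generic PSS estimation for a TOJ task (illustrative data, not from the paper).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOA in ms: positive values mean the high-value stimulus appeared first.
soas = np.array([-100, -50, -25, 0, 25, 50, 100], dtype=float)
p_first = np.array([0.08, 0.22, 0.41, 0.62, 0.78, 0.90, 0.97])  # P("high-value item first")

def psychometric(soa, pss, slope):
    # Cumulative Gaussian: probability of reporting the high-value item as first.
    return norm.cdf((soa - pss) / slope)

(pss, slope), _ = curve_fit(psychometric, soas, p_first, p0=[0.0, 30.0])
print(f"PSS = {pss:.1f} ms (a negative PSS would indicate prior entry for the high-value item)")
```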


Psychonomic Bulletin & Review | 2014

Long-term facilitation of return: A response-retrieval effect

Jason Rajsic; Yena Bi; Daryl E. Wilson

The present study used a target–target procedure to examine the extent to which perceptual and response factors contribute to inhibition of return (IOR) in a visual discrimination task. When the target was perceptually identical to the previous target and the required response was the same, facilitation was observed for both standard and long-term target–target stimulus onset asynchronies (SOAs). When the color of the previous target differed from that of the current target but the response remained the same, facilitation was reduced in both the standard SOA and long-term SOA conditions. Finally, IOR was observed for both standard and long-term SOAs only in the condition in which there was a change in response. This pattern of inhibition and facilitation provides new evidence that the responses previously associated with a location play an important role in the ability to respond to a stimulus. We interpret this finding as consistent with a framework in which the involuntary retrieval of bound stimulus–response episodes contributes to response compatibility effects in visual stimulus discrimination.


Attention, Perception, & Psychophysics | 2017

Intervening response events between identification targets do not always turn repetition benefits into repetition costs

Matthew D. Hilchey; Jason Rajsic; Greg Huffman; Jay Pratt

When there is a relatively long interval between two successive stimuli that must be detected or localized, there are robust processing costs when the stimuli appear at the same location. However, when two successive visual stimuli that must be identified appear at the same location, there are robust same-location costs only when the two stimuli differ in their responses; otherwise, same-location benefits are observed. Two separate frameworks, inhibited attentional orienting and episodic integration, have been proposed to account for these patterns. Recent findings hint at a possible reconciliation between these frameworks: requiring a response to an event in between two successive visual stimuli may unmask same-stimulus and same-location costs that are otherwise obscured by episodic integration benefits in identification tasks. We tested this hybrid account by integrating an intervening response event with an identification task that would otherwise generate the boundary between same-location benefits and costs. Our results showed that the intervening event did not alter the boundary between location repetition benefits and costs, nor did it reliably or unambiguously reverse the common stimulus-response repetition benefit. The findings delimit the usefulness of an intervening event for disrupting episodic integration, suggesting that effects from intervening response events are tenuous. The divide between attention and feature integration accounts is delineated in the context of methodological and empirical considerations.


NeuroImage | 2017

Neural representation of geometry and surface properties in object and scene perception

Matthew X. Lowe; Jason Rajsic; Jason P. Gallivan; Susanne Ferber; Jonathan S. Cant

Multiple cortical regions are crucial for perceiving the visual world, yet the processes shaping representations in these regions are unclear. To address this issue, we must elucidate how perceptual features shape representations of the environment. Here, we explore how the weighting of different visual features affects neural representations of objects and scenes, focusing on the scene-selective parahippocampal place area (PPA), but additionally including the retrosplenial complex (RSC), occipital place area (OPA), lateral occipital (LO) area, fusiform face area (FFA), and occipital face area (OFA). Across three experiments, we examined functional magnetic resonance imaging (fMRI) activity while human observers viewed scenes and objects that varied in geometry (shape/layout) and surface properties (texture/material). Interestingly, we found equal sensitivity in the PPA for these properties within a scene, revealing that spatial selectivity alone does not drive activation within this cortical region. We also observed sensitivity to object texture in PPA, but not to the same degree as scene texture, and representations in PPA varied when objects were placed within scenes. We conclude that PPA may process surface properties in a domain-specific manner, and that the processing of scene texture and geometry is equally weighted in PPA and may be mediated by similar underlying neuronal mechanisms.


Visual Cognition | 2017

Response-mediated spatial priming despite perfectly valid target location cues and intervening response events

Matthew D. Hilchey; Jason Rajsic; Greg Huffman; Jay Pratt

Attentional effects are often inferred from keypress reaction time (RT) studies when two sequentially presented stimuli, appearing at the same location, generate costs or benefits. The universality of these attentional attributions is challenged by data from perceptual discrimination tasks, which reveal that location repetition benefits and costs depend on whether a prior response repeats or switches, respectively. According to dual-stage accounts, these post-attentional effects may be abolished by making responses in between two target stimuli or by increasing target location certainty, leaving only attentional effects. Here, we test these accounts by requiring responses to stimuli in between targets and by increasing target location certainty with 100% valid location cues. Contrary to expectations, there was no discernible effect of cueing on any repetition effects, although the intervening response diminished stimulus-response repetition effects while subtly reducing location-response repetition effects. Despite this, there was little unambiguous evidence of attentional effects independent of responding. Taken together, the results further highlight the robustness of location-response repetition effects in perceptual discrimination tasks, which challenge whether there are enduring attentional effects in this paradigm.


Psychological Science | 2018

Dissociating Orienting Biases From Integration Effects With Eye Movements

Matthew D. Hilchey; Jason Rajsic; Greg Huffman; Raymond M. Klein; Jay Pratt

Despite decades of research, the conditions under which shifts of attention to prior target locations are facilitated or inhibited remain unknown. This ambiguity is a product of the popular feature discrimination task, in which attentional bias is commonly inferred from the efficiency by which a stimulus feature is discriminated after its location has been repeated or changed. Problematically, these tasks lead to integration effects; effects of target-location repetition appear to depend entirely on whether the target feature or response also repeats, allowing for several possible inferences about orienting bias. To disentangle integration effects from orienting biases, we designed the present experiments to require localizing eye movements and manual discrimination responses to serially presented targets with randomly repeating locations. Eye movements revealed consistent biases away from prior target locations. Manual discrimination responses revealed integration effects. These data collectively revealed inhibited reorienting and integration effects, which resolve the ambiguity and reconcile episodic integration and attentional orienting accounts.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2017

Accessibility Limits Recall from Visual Working Memory

Jason Rajsic; Garrett Swan; Daryl E. Wilson; Jay Pratt

In this article, we demonstrate limitations on the accessibility of information in visual working memory (VWM). Recently, cued recall has been used to estimate the fidelity of information in VWM, where the feature of a cued object is reproduced from memory (Bays, Catalao, & Husain, 2009; Wilken & Ma, 2004; Zhang & Luck, 2008). Response error in these tasks has been largely studied with respect to failures of encoding and maintenance; however, the retrieval operations used in these tasks remain poorly understood. By varying the number and type of object features provided as a cue in a visual delayed-estimation paradigm, we directly assess the nature of retrieval errors in delayed estimation from VWM. Our results demonstrate that providing additional object features in a single cue reliably improves recall, largely by reducing swap, or misbinding, responses. In addition, performance simulations using the binding pool model (Swan & Wyble, 2014) were able to mimic this pattern of performance across a large span of parameter combinations, demonstrating that the binding pool provides a possible mechanism underlying this pattern of results that is not merely a symptom of one particular parametrization. We conclude that accessing visual working memory is a noisy process that can lead to errors over and above those arising from encoding and maintenance limitations.
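To give intuition for why a richer cue might reduce swap responses, the toy simulation below (invented for illustration here, not the binding pool model of Swan & Wyble, 2014) retrieves the stored object whose noisy memory trace best matches the cue; matching on two features misidentifies the probed object less often than matching on one. The set size, noise level, and feature space are arbitrary assumptions.

```python
# Toy illustration: more cue features -> fewer retrieval misidentifications (swaps).
# Not the authors' model; set size, noise, and feature coding are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_trials, set_size, noise = 20_000, 4, 0.35

def swap_rate(n_cue_features):
    swaps = 0
    for _ in range(n_trials):
        # True feature values for each object (unit interval, arbitrary units).
        objects = rng.uniform(0, 1, size=(set_size, 2))          # columns: location, shape
        stored = objects + rng.normal(0, noise, objects.shape)   # noisy memory traces
        probe = 0                                                # object 0 is probed
        cue = objects[probe, :n_cue_features]                    # cue carries 1 or 2 true features
        # Retrieve the stored object closest to the cue on the cued feature(s).
        dists = np.linalg.norm(stored[:, :n_cue_features] - cue, axis=1)
        if np.argmin(dists) != probe:
            swaps += 1                                           # a nonprobed object was retrieved
    return swaps / n_trials

print("swap rate, 1-feature cue:", round(swap_rate(1), 3))
print("swap rate, 2-feature cue:", round(swap_rate(2), 3))
```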

Collaboration


Dive into Jason Rajsic's collaborations.

Top Co-Authors

Jay Pratt

University of Toronto

Henry Liu

University of Toronto