Katherine Bettencourt
Harvard University
Publications
Featured research published by Katherine Bettencourt.
The Journal of Neuroscience | 2010
Summer Sheremata; Katherine Bettencourt; David C. Somers
Visual short-term memory (VSTM) briefly maintains a limited sample of the visual world. Activity in the intraparietal sulcus (IPS) tightly correlates with the number of items stored in VSTM. This activity may occur in or near multiple distinct visuotopically mapped cortical areas that have been identified in IPS. To understand the topographic and spatial properties of VSTM, we investigated VSTM activity in visuotopic IPS regions using functional magnetic resonance imaging. VSTM drove areas IPS0–2, but largely spared IPS3–4. Under visual stimulation, these areas in both hemispheres code the contralateral visual hemifield. In contrast to the hemispheric symmetry observed with visual stimulation, an asymmetry emerged during VSTM with increasing memory load. The left hemisphere exhibited load-dependent activity only for contralateral memory items; right hemisphere activity reflected VSTM load regardless of visual-field location. Our findings demonstrate that VSTM induces a switch in spatial representation in right hemisphere IPS from contralateral to full-field coding. The load dependence of right hemisphere effects argues that memory-dependent and/or attention-dependent processes drive this change in spatial processing. This offers a novel means for investigating spatial-processing impairments in hemispatial neglect.
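A minimal sketch of the kind of load-dependence analysis this abstract describes: fitting ROI-averaged BOLD amplitude as a linear function of memory load, separately for contralateral and ipsilateral memory items in each hemisphere. The variable names, toy numbers, and least-squares approach are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: load-dependent activity slopes per hemisphere and hemifield.
# All numbers are toy values; the real analysis details are not given in the abstract.
import numpy as np

def load_slope(bold_by_load: dict) -> float:
    """Fit BOLD amplitude as a linear function of memory load; return the slope."""
    loads = np.array(sorted(bold_by_load))
    amps = np.array([bold_by_load[k] for k in loads])
    slope, _intercept = np.polyfit(loads, amps, deg=1)
    return slope

# Toy ROI-averaged amplitudes (arbitrary units) for IPS0-2 in each hemisphere,
# split by whether the memorized items were contralateral or ipsilateral.
left_contra  = {1: 0.40, 2: 0.55, 3: 0.70, 4: 0.78}
left_ipsi    = {1: 0.41, 2: 0.43, 3: 0.44, 4: 0.45}   # little load dependence
right_contra = {1: 0.42, 2: 0.58, 3: 0.71, 4: 0.80}
right_ipsi   = {1: 0.40, 2: 0.54, 3: 0.68, 4: 0.77}   # load-dependent despite ipsilateral items

for name, data in [("L-contra", left_contra), ("L-ipsi", left_ipsi),
                   ("R-contra", right_contra), ("R-ipsi", right_ipsi)]:
    print(f"{name}: slope = {load_slope(data):.3f}")
```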
Journal of Vision | 2009
Katherine Bettencourt; David C. Somers
Mounting evidence suggests that visual attention may be simultaneously deployed to multiple distinct object locations, but the constraints upon this multi-object attentional system are still debated. Results from multiple object tracking (MOT) experiments have been interpreted as revealing a fixed attentional capacity limit of 4 objects, while other evidence has suggested that attentional capacity may be more fluid. Here, we investigated the influence of target stimulus factors, such as speed and size, and of distractor filtering factors, such as number of distractors and screen density, on MOT performance. Each factor had significant effects on capacity, producing values that ranged from above 6 objects down to one object, depending on the task demands. Although our results support the view that crowding effects modulate the effective capacity of attention, we also find evidence that central processes related to distractor suppression and target enhancement modulate capacity.
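The capacity values reported here are derived from MOT accuracy. One common high-threshold correction (not necessarily the exact estimator used in this study) assumes that if m of the n targets are tracked and the observer guesses on the rest, a final target/nontarget probe is answered correctly with probability p = m/n + (1 - m/n)/2, giving m = n(2p - 1). A small sketch:

```python
# Illustrative guessing-corrected capacity estimate for MOT; whether this exact
# formula was used in the study above is an assumption.
def mot_capacity(n_targets: int, p_correct: float) -> float:
    """Effective number of tracked objects under a high-threshold guessing correction."""
    return n_targets * (2.0 * p_correct - 1.0)

# Example: slower/sparser displays can support more tracked objects than fast/dense ones.
print(mot_capacity(6, 0.85))  # ~4.2 objects tracked
print(mot_capacity(4, 0.95))  # ~3.6 objects tracked
```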
Journal of Vision | 2011
Katherine Bettencourt; Samantha W. Michalka; David C. Somers
Both visual attention and visual short-term memory (VSTM) have been shown to have capacity limits of 4 ± 1 objects, driving the hypothesis that they share a visual processing buffer. However, these capacity limitations also show strong individual differences, making the degree to which these capacities are related unclear. Moreover, other research has suggested a distinction between attention and VSTM buffers. To explore the degree to which capacity limitations reflect the use of a shared visual processing buffer, we compared individual subjects' capacities on attentional and VSTM tasks completed in the same testing session. We used a multiple object tracking (MOT) task and a VSTM change detection task, each with varying numbers of distractors, to measure capacity. Significant correlations in capacity were not observed between the MOT and VSTM tasks when distractor filtering demands differed between the tasks. Instead, significant correlations were seen when the tasks shared spatial filtering demands. Moreover, these filtering demands impacted capacity similarly in both the attention and VSTM tasks. These observations fail to support the view that visual attention and VSTM capacity limits result from a shared buffer but instead highlight the role of the resource demands of underlying processes in limiting capacity.
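A hedged sketch of how the individual-differences comparison described above could be run: estimate change-detection capacity per subject with Cowan's K (K = set size × (hit rate − false-alarm rate), a standard estimator, though whether this exact one was used here is an assumption), then correlate it with per-subject MOT capacity to test the shared-buffer prediction.

```python
# Illustrative per-subject capacity correlation; all data below are toy values.
import numpy as np
from scipy.stats import pearsonr

def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Change-detection capacity estimate (Cowan's K)."""
    return set_size * (hit_rate - false_alarm_rate)

# Toy per-subject capacities (hypothetical numbers, not the study's data).
vstm_k = np.array([cowan_k(4, h, fa) for h, fa in
                   [(0.90, 0.10), (0.80, 0.15), (0.95, 0.05), (0.70, 0.20)]])
mot_k = np.array([3.8, 2.9, 4.4, 2.5])

r, p = pearsonr(mot_k, vstm_k)
print(f"r = {r:.2f}, p = {p:.3f}")  # a shared buffer predicts a strong positive r
```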
Journal of Neurophysiology | 2016
Katherine Bettencourt; Yaoda Xu
Based on different cognitive tasks and mapping methods, the human intraparietal sulcus (IPS) has been subdivided according to multiple different organizational schemes. The presence of topographically organized regions throughout IPS indicates strong location-based processing in this brain region. However, visual short-term memory (VSTM) studies have shown that while a region in the inferior portion of IPS (inferior IPS) is involved in object individuation and selection based on location, a region in the superior portion of IPS (superior IPS) primarily encodes and stores object featural information. Here, we determined the localization of these two VSTM IPS regions with respect to the topographic IPS regions in individual participants and the role of different IPS regions in location- and feature-based processing. Anatomically, inferior IPS showed an 85.2% overlap with topographic IPS regions, with the greatest overlap seen in V3A and V3B, and superior IPS showed a 73.6% overall overlap, with the greatest overlap seen in IPS0-2. Functionally, there appeared to be a partial overlap between IPS regions involved in location- and feature-based processing, with more inferior and medial regions showing stronger location-based processing and more superior and lateral regions showing stronger feature-based processing. Together, these results suggest that understanding the multiplex nature of IPS in visual cognition cannot be reduced to examining the functions of the different IPS topographic regions; rather, it can only be accomplished by understanding how regions identified by different tasks and methods colocalize with each other.
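The overlap percentages above quantify how much of a functionally defined ROI falls inside a topographically defined one. A minimal sketch of that computation over binary voxel masks, with all names and numbers assumed for illustration:

```python
# Illustrative ROI overlap computation; masks are boolean voxel arrays in a
# common space. Names (superior_ips, ips0_2) and the random toy masks are assumptions.
import numpy as np

def percent_overlap(functional_roi: np.ndarray, topographic_roi: np.ndarray) -> float:
    """Percentage of the functional ROI's voxels that fall inside the topographic ROI."""
    n_functional = functional_roi.sum()
    if n_functional == 0:
        return 0.0
    return 100.0 * np.logical_and(functional_roi, topographic_roi).sum() / n_functional

# Toy example with random masks in a small volume.
rng = np.random.default_rng(0)
superior_ips = rng.random((20, 20, 20)) > 0.9
ips0_2 = rng.random((20, 20, 20)) > 0.8
print(f"{percent_overlap(superior_ips, ips0_2):.1f}% of superior IPS falls in IPS0-2")
```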
Journal of Vision | 2015
Katherine Bettencourt; Yaoda Xu
Recent work has shown that the human parietal cortex, in particular the superior intraparietal sulcus (IPS), plays a central role in visual short-term memory (VSTM) storage. However, much remains unknown about the nature of these memory representations, including whether they are similar to or distinct from perceptual representations. Using fMRI multivariate pattern analysis (MVPA), it has been shown that VSTM representations in occipital cortex are highly similar to perceptual representations. Here, we tested whether the same would be true for VSTM representations in superior IPS. On the one hand, VSTM representations in superior IPS could simply be an extension of the sensory representations formed during perception. This would predict a high degree of similarity between VSTM and perceptual representations in this region. However, we have previously shown that VSTM representations in superior IPS, unlike those in occipital cortex, are robust to visual distraction. This suggests that the nature of VSTM representation in the two regions may differ, and that representations in superior IPS may be consolidated and thus distinct from perceptual representations. In the present study, participants completed both a VSTM task and a perceptual task using face and gazebo stimuli. We then decoded between the two stimulus types both within a task and across the two tasks. To minimize any attention or memory effects in the perceptual task, participants performed a one-back letter task at fixation. Pilot results showed decoding of both memory and perceptual information in superior IPS and, importantly, successful cross-decoding between the two tasks. This suggests that, just like in occipital cortex, VSTM information in superior IPS is represented in a similar manner to perceptual information. Meeting abstract presented at VSS 2015.
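A hedged sketch of the cross-task decoding logic described in this abstract: train a linear classifier to discriminate face versus gazebo response patterns from the perceptual task, then test it on patterns from the VSTM task in the same ROI. The data shapes, classifier choice, and preprocessing are illustrative assumptions, not the study's actual analysis code.

```python
# Illustrative cross-task MVPA decoding; all data below are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy trials-by-voxels pattern matrices with labels 0 = face, 1 = gazebo.
rng = np.random.default_rng(0)
perception_patterns = rng.standard_normal((80, 200))
perception_labels = rng.integers(0, 2, 80)
vstm_patterns = rng.standard_normal((80, 200))
vstm_labels = rng.integers(0, 2, 80)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(perception_patterns, perception_labels)          # train on perceptual-task patterns
cross_accuracy = clf.score(vstm_patterns, vstm_labels)   # test on VSTM-task patterns
print(f"cross-task decoding accuracy: {cross_accuracy:.2f}")
```

Above-chance cross-task accuracy would indicate that the memory and perceptual representations in the ROI share a common format, which is the comparison the abstract reports.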
Journal of Vision | 2014
Katherine Bettencourt; Yaoda Xu
Journal of Vision | 2011
Katherine Bettencourt; Yaoda Xu
Journal of Vision | 2010
Katherine Bettencourt; David C. Somers
Journal of Vision | 2013
Katherine Bettencourt; Yaoda Xu
Journal of Vision | 2012
Maryam Vaziri Pashkam; Katherine Bettencourt; Yaoda Xu