Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Abigail Noyce is active.

Publications


Featured research published by Abigail Noyce.


Attention Perception & Psychophysics | 2016

Short-term memory stores organized by information domain

Abigail Noyce; Nishmar Cestero; Barbara G. Shinn-Cunningham; David C. Somers

Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.


Journal of Vision | 2011

Surprises are mistakes: An EEG source localization study of prediction errors

Abigail Noyce; Robert Sekuler

The abilities to recognize a familiar sequence, use that sequence to predict future events, and then monitor the prediction’s accuracy are core cognitive skills whose neural underpinnings are poorly understood. To characterize neural responses associated with prediction-violating events, we asked subjects to learn novel visuomotor sequences. In this task, subjects viewed a disk that traversed a quasi-random sequence of five linear motion components, and then tried to reproduce the disk’s path from memory. The fidelity of subjects’ imitations improved over four presentations of each sequence. To create unexpected, prediction-violating stimuli, deviant segments were occasionally inserted into a sequence with which the subject had become familiar. A high-density scalp EEG system examined the difference between signals evoked (i) by an expected, predictable motion component, and (ii) by an unexpected component. A realistic head model localized sources of neural activity generated as subjects viewed the two types of motion components. Although ventral pre-frontal areas were active throughout viewing of both expected and unexpected segments, the timing and location of other sources differentiated the two. Cerebellar areas were active only during segments whose directions were expected, beginning approximately 150 ms after the disk began to move. In contrast, during unexpected segments, anterior cingulate cortex showed activation, beginning approximately 300 ms after the disk began to move. The time course of such activation may shed light on processes that integrate sensory input with top-down predictions. Our results suggest that the mechanisms responsible for monitoring the validity of visual predictions for motions in a sequence are similar to those that detect errors in responses, as demonstrated previously in simpler, discrete motor tasks.


NeuroImage | 2018

Prediction of individualized task activation in sensory modality-selective frontal cortex with ‘connectome fingerprinting’

Sean Tobyne; David C. Somers; James Brissenden; Samantha W. Michalka; Abigail Noyce; David Osher

The human cerebral cortex is estimated to comprise 200–300 distinct functional regions per hemisphere. Identification of the precise anatomical location of an individual's unique set of functional regions is a challenge for neuroscience that has broad scientific and clinical utility. Recent studies have demonstrated the existence of four interleaved regions in lateral frontal cortex (LFC) that are part of broader visual attention and auditory attention networks (Michalka et al., 2015; Noyce et al., 2017; Tobyne et al., 2017). Due to a large degree of inter-subject anatomical variability, identification of these regions depends critically on within-subject analyses. Here, we demonstrate that, for both sexes, an individual's unique pattern of resting-state functional connectivity can accurately identify their specific pattern of visual- and auditory-selective working memory and attention task activation in LFC using "connectome fingerprinting." Building on prior techniques (Saygin et al., 2011; Osher et al., 2016; Tavor et al., 2016; Smittenaar et al., 2017; Wang et al., 2017; Parker Jones et al., 2017), we demonstrate here that connectome fingerprint predictions are far more accurate than group-average predictions and match the accuracy of within-subject task-based functional localization, while requiring less data. These findings are robust across brain parcellations and are improved with penalized regression methods. Because resting-state data can be easily and rapidly collected, these results have broad implications for both clinical and research investigations of frontal lobe function. Our findings also provide a set of recommendations for future research.
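The "connectome fingerprinting" approach described above learns a mapping from each cortical location's resting-state connectivity profile to its task activation, using penalized regression. The following is a minimal sketch of that regression step only, not the authors' pipeline: the array names and shapes are hypothetical, and a closed-form ridge solution stands in for whatever penalized method the study actually used.

```python
import numpy as np


def fit_fingerprint_model(connectivity, activation, alpha=1.0):
    """Ridge regression from resting-state connectivity to task activation.

    connectivity: (n_vertices, n_features) array; each row is one cortical
                  location's connectivity profile (hypothetical shape).
    activation:   (n_vertices,) task activation value at each location.
    alpha:        ridge penalty; larger values shrink the weights more.
    Returns the weight vector w of shape (n_features,).
    """
    X, y = connectivity, activation
    n_features = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + alpha * I)^(-1) X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)


def predict_activation(connectivity, w):
    """Predict task activation for new locations (or a new subject)."""
    return connectivity @ w
```

In the fingerprinting setting, the model would be trained on vertices pooled across training subjects and then applied to a held-out subject's resting-state connectivity to predict that individual's task activation map.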


Journal of the Acoustical Society of America | 2016

Nonparametric statistical approaches to neuroimaging data

Abigail Noyce; Robert Sekuler

Neuroimaging, including electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI), is a rich source of information, allowing perceptual researchers to measure neural responses to sensory inputs. Scalp EEG’s high temporal resolution makes it a method of particular interest to auditory scientists who want to identify (and possibly model) the spatial and temporal extent of a difference between two experimental conditions. Analyzing neuroimaging data, however, poses several challenges. First, collecting data from tens or even hundreds of sensors and hundreds of time points creates a multiple comparisons problem. Second, neuroimaging data are often quite noisy (as is the brain itself). Finally, imaging data distributions are often incompatible with the assumptions of traditional parametric statistics; most notably, data are often non-normal, with unequal variance. We illustrate strategies for addressing these challenges in the context of a selective attention study. EE...
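One standard nonparametric remedy for the multiple-comparisons problem described above is a max-statistic permutation test, which needs no normality or equal-variance assumptions. The sketch below is illustrative only, not the authors' analysis code: the array shapes and the paired sign-flip scheme are assumptions chosen for a within-subject two-condition EEG design.

```python
import numpy as np

rng = np.random.default_rng(0)


def max_stat_permutation_test(cond_a, cond_b, n_perm=1000, rng=rng):
    """Paired max-statistic permutation test across sensors x time points.

    cond_a, cond_b: (n_subjects, n_points) arrays, where n_points flattens
                    the sensor-by-time grid (hypothetical layout).
    Returns (obs, p_corr): the observed paired t statistic at each point
    and a familywise-error-corrected p-value at each point.
    """
    diffs = cond_a - cond_b                       # paired differences
    n = diffs.shape[0]
    obs = diffs.mean(axis=0) / (diffs.std(axis=0, ddof=1) / np.sqrt(n))
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly flip each subject's condition labels (sign flip).
        signs = rng.choice([-1.0, 1.0], size=(n, 1))
        perm = diffs * signs
        t = perm.mean(axis=0) / (perm.std(axis=0, ddof=1) / np.sqrt(n))
        # The max over all points controls familywise error.
        max_null[i] = np.abs(t).max()
    p_corr = (np.abs(obs)[None, :] <= max_null[:, None]).mean(axis=0)
    return obs, p_corr
```

Because the null distribution is built from the data themselves, the corrected p-values remain valid for non-normal, unequal-variance responses; cluster-based variants extend the same idea to the spatiotemporal extent of an effect.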


Journal of the Acoustical Society of America | 2015

Understanding cross-modal interactions of spatial and temporal information from a cortical perspective

Barbara G. Shinn-Cunningham; Samantha W. Michalka; Abigail Noyce; David C. Somers

In the ventriloquism effect, perception of visual spatial information biases judgments of auditory events, yet there is little effect of auditory stimuli on perception of visual locations. In the flash-beep illusion, the number of auditory “beeps” biases judgments of the count of visual flashes, but visual flashes have little effect on perceived number of auditory events. These asymmetries suggest that vision is the “natural” sensory modality for coding spatial information, while audition is specialized for representing temporal information. Here, we review recent behavioral and neuroimaging evidence from our labs exploring the neural underpinnings of such perceptual asymmetries. Specifically, we find that there are distinct frontal cortical networks associated with visual information and auditory information. Yet these networks can be recruited by the other sensory modality, depending on task demands. For instance, when judging spatial aspects of auditory inputs, neural areas associated with visual processing are recruited; when judging temporal aspects of visual inputs, areas associated with auditory processing are activated. We also find another asymmetry: knowing when a spatial event is going to occur helps listeners judge location, but knowing where an event will occur does not help judgments about that event’s timing. These kinds of studies help elucidate how temporal and spatial information is encoded in the brain, and the neural mechanisms by which visual-spatial and auditory-temporal information interact.


Journal of Vision | 2015

Space Depends On Time: Informational Asymmetries in Visual and Auditory Short-Term Memory

Abigail Noyce; Nishmar Cestero; Barbara G. Shinn-Cunningham; David C. Somers

Sensory modalities vary in adeptness for spatial and temporal information domains (Welch & Warren, 1980). Recent work suggests that attention and short-term memory (STM) recruit this variability (Michalka et al., submitted). Here, we investigate the relationships among visual and auditory modalities, and spatial and temporal STM. We developed stimuli comprising short sequences of visual events (instantaneous image changes) or auditory events (50ms complex tones). Each event within a sequence had a unique spatial location and a unique inter-event interval. These stimuli were used in a STM change-detection task. On each trial, two successive sequences were presented; a change could occur among the locations, the intervals, both, or neither. Subjects attended to either space (the sequence of locations), or time (the sequence of inter-event intervals), and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal (both sequences presented in the same modality) and crossmodal (sequence 1 visual and sequence 2 auditory, or vice versa) trials for both tasks. We found a strong modality appropriateness effect, with best temporal performance on unimodal auditory trials, and best spatial performance on unimodal visual trials. The order of modalities on crossmodal trials mattered for space (benefit for visual sequence 1) but not for time, supporting a domain recruitment account of spatial STM. We also investigated cross-domain interactions by measuring whether instability of spatial location affected change detection for intervals, or vice versa. Changes in timing from sequence 1 to sequence 2 substantially impaired change detection for locations, while changes in locations did not impair change detection for intervals. 
These results suggest that spatial and temporal STM are asymmetrically related, such that timing information facilitates monitoring a series of locations, but spatial knowledge is unnecessary when monitoring a series of intervals. Meeting abstract presented at VSS 2015.


Frontiers in Human Neuroscience | 2014

Violations of newly-learned predictions elicit two distinct P3 components.

Abigail Noyce; Robert Sekuler

Sensitivity to the environment’s sequential regularities makes it possible to predict upcoming sensory events. To investigate the mechanisms that monitor such predictions, we recorded scalp EEG as subjects learned to reproduce sequences of motions. Each sequence was seen and reproduced four successive times, with occasional deviant directions of motion inserted into otherwise-familiar and predictable sequences. To dissociate the neural activity associated with encoding new items from that associated with detecting sequence deviants, we measured ERPs to new, familiar, and deviant sequence items. Both new and deviant sequence items evoked enhanced P3 responses, with the ERP to deviant items encompassing both P300-like and Novelty P3-like subcomponents with distinct timing and topographies. These results confirm that the neural response to deviant items differs from that to new items, and that unpredicted events in newly-learned sequences are identified by processes similar to those monitoring stable sequential regularities.


Journal of Vision | 2011

Eye movements and imitation learning: Intentional disruption of expectation

Jessica Maryott; Abigail Noyce; Robert Sekuler


Neuropsychologia | 2014

Oddball distractors demand attention: neural and behavioral responses to predictability in the flanker task

Abigail Noyce; Robert Sekuler


Journal of Vision | 2017

Predicting an individual's own Dorsal Attention Network from their functional connectivity fingerprint

David E. Osher; Sean Tobyne; James Brissenden; Abigail Noyce; Samantha W. Michalka; Emily Levin; David C. Somers

Collaboration


Dive into Abigail Noyce's collaborations.

Top Co-Authors

Samantha W. Michalka

Franklin W. Olin College of Engineering
