Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Kevin N. O'Connor is active.

Publication


Featured research published by Kevin N. O'Connor.


The Journal of Neuroscience | 2013

Differences between Primary Auditory Cortex and Auditory Belt Related to Encoding and Choice for AM Sounds

Mamiko Niwa; Jeffrey S. Johnson; Kevin N. O'Connor; Mitchell L. Sutter

We recorded from middle lateral (ML) and primary (A1) auditory cortex while macaques discriminated amplitude-modulated (AM) noise from unmodulated noise. Compared with A1, ML had a higher proportion of neurons that encoded increasing AM depth by decreasing their firing rates (“decreasing” neurons), particularly among responses that were not synchronized to the modulation. Choice probability (CP) analysis revealed that A1 and ML activity differed during the first half of the test stimulus. In A1, significant CP began before the test stimulus, remained relatively constant (or increased slightly) during the stimulus, and increased greatly within 200 ms of lever release. Neurons in ML behaved similarly, except that significant CP disappeared during the first half of the stimulus and reappeared during the second half and prerelease periods. CP differences between A1 and ML depended on neural response type. In ML (but not A1), when activity was lower during the first half of the stimulus in nonsynchronized, decreasing neurons, the monkey was more likely to report AM. Neurons that both increased firing rate with increasing modulation depth (“increasing” neurons) and synchronized their responses to AM had similar choice-related activity dynamics in ML and A1. These results suggest that, ascending the auditory system, there is a transformation in AM coding from primarily synchronized, increasing responses in A1 to nonsynchronized and dual (increasing/decreasing) coding in ML. This sensory transformation is accompanied by changes in the timing of choice-related activity, suggesting functional differences between A1 and ML related to attention and/or behavior.
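Choice probability of this kind is conventionally the area under the ROC curve separating a neuron's firing-rate distributions on trials grouped by the animal's report. A minimal sketch of that computation (the simulated rates and function names are illustrative, not data or code from the study):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def choice_probability(rates_report_am, rates_report_unmod):
    """Area under the ROC curve separating firing rates on trials where
    the animal reported AM vs. unmodulated. Equivalent to the Mann-Whitney
    U statistic normalized by the number of trial pairs."""
    n1, n2 = len(rates_report_am), len(rates_report_unmod)
    u, _ = mannwhitneyu(rates_report_am, rates_report_unmod,
                        alternative="two-sided")
    return u / (n1 * n2)

# Illustrative data: a "decreasing" neuron that fires less before an AM report.
rng = np.random.default_rng(0)
cp = choice_probability(rng.poisson(8, 60), rng.poisson(12, 60))
print(f"CP = {cp:.2f}")  # below 0.5: lower rates predict AM reports
```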


Journal of Neurophysiology | 2012

Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis

Jeffrey S. Johnson; Pingbo Yin; Kevin N. O'Connor; Mitchell L. Sutter

Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as sensitive as, or more sensitive than, spike count in modulation detection. Cells were less sensitive to modulation depth when tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive than behavior, suggesting that for temporal measures the lower-envelope principle cannot account for behavioral thresholds. Three methods of pooling spike trains before analysis (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all increase neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling a few dozen cells can yield detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1.
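The standard temporal (synchrony) measure in this literature is vector strength: the resultant length of spike phases taken relative to the modulation cycle. A minimal sketch, assuming spike times in seconds (the simulated spike trains are illustrative):

```python
import numpy as np

def vector_strength(spike_times, mod_freq_hz):
    """Vector strength: resultant length of spike phases relative to the
    modulation cycle (1 = perfect phase locking, ~0 = no locking)."""
    phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Illustrative: spikes clustered near one phase of a 20-Hz modulation.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1, 200))
locked = t[np.cos(2 * np.pi * 20 * t) > 0.5]  # keep spikes near the envelope peak
print(vector_strength(locked, 20.0))          # high VS for the locked train
print(vector_strength(t, 20.0))               # near 0 for the unlocked train
```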


Nature Neuroscience | 2002

A new window on sound.

Bruno A. Olshausen; Kevin N. O'Connor

Auditory filters must trade off frequency tuning against temporal precision. The compromise achieved by the mammalian cochlea seems well matched to the sounds of the natural environment.
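For context (this brief piece does not state the formula): the tradeoff referred to is the time-bandwidth uncertainty relation for linear filters,

```latex
\Delta t \,\Delta f \;\ge\; \frac{1}{4\pi},
```

so sharper frequency tuning (smaller \Delta f) necessarily lengthens a filter's temporal window (larger \Delta t); a filter bank can only choose where along this tradeoff to operate.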


Frontiers in Systems Neuroscience | 2010

Complex spectral interactions encoded by auditory cortical neurons: relationship between bandwidth and pattern.

Kevin N. O'Connor; Pingbo Yin; Christopher I. Petkov; Mitchell L. Sutter

The focus of most research on auditory cortical neurons has concerned the effects of rather simple stimuli, such as pure tones or broad-band noise, or the modulation of a single acoustic parameter. Extending these findings to feature coding in more complex stimuli such as natural sounds may be difficult, however. Generalizing results from the simple to the more complex case may be complicated by non-linear interactions occurring between multiple, simultaneously varying acoustic parameters in complex sounds. To examine this issue in the frequency domain, we performed a parametric study of the effects of two global features, spectral pattern (here ripple frequency) and bandwidth, on primary auditory cortex (A1) neurons in awake macaques. Most neurons were tuned for one or both variables, and most also displayed an interaction between bandwidth and pattern, implying that their effects were conditional or interdependent. A spectral linear filter model was able to qualitatively reproduce the basic effects and interactions, indicating that a simple neural mechanism may be able to account for these interdependencies. Our results suggest that the behavior of most A1 neurons is likely to depend on multiple parameters, and so most are unlikely to respond independently or invariantly to specific acoustic features.
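Ripple stimuli of the kind described carry a sinusoidal spectral envelope on a log-frequency axis, jointly parameterized by ripple frequency (cycles/octave) and bandwidth (octaves). A sketch of such a spectrum; the function and parameter values are illustrative, not the study's stimulus code:

```python
import numpy as np

def ripple_spectrum(freqs_hz, f0_hz, ripple_cyc_per_oct, bandwidth_oct, depth=1.0):
    """Amplitude spectrum with sinusoidal ripples on a log-frequency axis,
    restricted to `bandwidth_oct` octaves centered on f0_hz."""
    octaves = np.log2(freqs_hz / f0_hz)
    amp = 1.0 + depth * np.sin(2 * np.pi * ripple_cyc_per_oct * octaves)
    amp[np.abs(octaves) > bandwidth_oct / 2] = 0.0  # apply the bandwidth limit
    return amp

freqs = np.linspace(100, 12800, 4000)
spec = ripple_spectrum(freqs, f0_hz=1000, ripple_cyc_per_oct=2.0, bandwidth_oct=3.0)
# A stimulus could then be synthesized by summing random-phase sinusoids
# weighted by `spec`, letting ripple frequency and bandwidth vary independently.
```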


Journal of Cognitive Neuroscience | 2000

Global Spectral and Location Effects in Auditory Perceptual Grouping

Kevin N. O'Connor; Mitchell L. Sutter

An important problem in cognitive and systems neuroscience concerns the extent to which perceptual organization can be explained by local, peripheral physiological mechanisms, or rather by more global, central, higher-level processes. Though central to vision research, this issue has received little attention in the field of audition. One claim is that auditory-perceptual grouping mechanisms, possibly related to visual figure-ground segregation or pop-out, are low level, resulting from local processing in the frequency domain. However, no experiments have been performed specifically to test this question. We examined the effects of perceptual grouping on detection of the reversal of two repeated target tones, one constant in frequency (1030 Hz), the other free to vary between trials (1045-8580 Hz). Detection was examined in the presence of a 1000-Hz background tone that repeated between target presentations. By varying the frequency of the high-target tone, this task was designed to modulate grouping between the background and low-target tones, thereby affecting reversal detection. We predicted that at large target frequency differences (Δf), the high-target tone would segregate from the background and low-target tones, rendering the background and low-target tones less distinct. We found that reversal detection declined from optimal levels with increasing Δf, and that performance improved when the locations of the target and background sounds were spatially separated by at least 32°. These results demonstrate that global frequency integration over at least three octaves occurs through grouping, and that grouping is affected by source location. This implies that auditory-perceptual grouping involves global neural processing, i.e., the participation of neurons with very broad frequency input that are also sensitive to spatial location.
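Detection performance in tasks like this is commonly summarized with signal-detection sensitivity (d'). A generic sketch with illustrative hit and false-alarm rates, not the paper's data:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(d_prime(0.85, 0.20))  # ~1.88 for these illustrative rates
```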


The Journal of Neuroscience | 2017

Feature Selective Attention Adaptively Shifts Noise Correlations in Primary Auditory Cortex

Joshua D. Downer; Brittany Rapone; Jessica Verhein; Kevin N. O'Connor; Mitchell L. Sutter

Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (r_noise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on r_noise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in r_noise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (r_noise) in rhesus macaque A1 during task performance. Unlike previous studies showing that the effect of attention on r_noise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the distractor feature as well. We suggest that these effects represent an efficient process by which sensory cortex simultaneously enhances relevant information and suppresses irrelevant information.
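Noise correlations of this kind are conventionally the Pearson correlation of trial-by-trial spike counts after removing each neuron's mean response per stimulus condition. A minimal sketch with simulated counts (names and data are illustrative):

```python
import numpy as np

def noise_correlation(counts_a, counts_b, condition_ids):
    """Pearson correlation of trial-by-trial spike counts after z-scoring
    within each stimulus condition (removes signal correlations)."""
    za = np.empty_like(counts_a, dtype=float)
    zb = np.empty_like(counts_b, dtype=float)
    for c in np.unique(condition_ids):
        m = condition_ids == c
        za[m] = (counts_a[m] - counts_a[m].mean()) / counts_a[m].std()
        zb[m] = (counts_b[m] - counts_b[m].mean()) / counts_b[m].std()
    return np.corrcoef(za, zb)[0, 1]

rng = np.random.default_rng(2)
shared = rng.normal(size=400)                   # common trial-to-trial noise
cond = rng.integers(0, 4, 400)                  # four stimulus conditions
a = np.maximum(0, rng.poisson(10, 400) + (2 * shared).round()).astype(int)
b = np.maximum(0, rng.poisson(10, 400) + (2 * shared).round()).astype(int)
print(noise_correlation(a, b, cond))            # positive r_noise
```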


Journal of the Acoustical Society of America | 2017

The effects of varying tympanic-membrane material properties on human middle-ear sound transmission in a three-dimensional finite-element model

Kevin N. O'Connor; Hongxue Cai; Sunil Puria

An anatomically based three-dimensional finite-element human middle-ear (ME) model is used to test the sensitivity of ME sound transmission to tympanic-membrane (TM) material properties. The baseline properties produce responses comparable to published measurements of ear-canal input impedance and power reflectance, stapes velocity normalized by ear-canal pressure (P_EC), and middle-ear pressure gain (MEG), i.e., cochlear-vestibule pressure (P_V) normalized by P_EC. The mass, Young's modulus (E_TM), and shear modulus (G_TM) of the TM are varied, independently and in combination, over a wide range of values, with soft and bony TM-annulus boundary conditions. MEG is recomputed and plotted for each case, along with summaries of the magnitude and group-delay deviations from the baseline over low (below 0.75 kHz), mid (0.75-5 kHz), and high (above 5 kHz) frequencies. The MEG magnitude varies inversely with TM mass at high frequencies. Increasing E_TM boosts high frequencies and attenuates low and mid frequencies, especially with a bony TM annulus and when G_TM varies in proportion to E_TM, as for an isotropic material. Increasing G_TM on its own attenuates low and mid frequencies and boosts high frequencies. The sensitivity of MEG to TM material properties has implications for model development and the interpretation of experimental observations.
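The magnitude and group-delay summaries described can be computed directly from the complex transfer function. A minimal sketch, assuming MEG = P_V / P_EC is available as complex values sampled at frequencies f (the single-pole placeholder stands in for model output):

```python
import numpy as np

def gain_and_group_delay(freqs_hz, meg_complex):
    """Magnitude (dB) and group delay (-d(phase)/d(omega), seconds)
    of a complex transfer function such as MEG = P_V / P_EC."""
    mag_db = 20 * np.log10(np.abs(meg_complex))
    phase = np.unwrap(np.angle(meg_complex))
    group_delay = -np.gradient(phase, 2 * np.pi * freqs_hz)
    return mag_db, group_delay

# Placeholder: a single-pole response standing in for a computed MEG curve.
f = np.linspace(100, 10000, 500)
meg = 20.0 / (1 + 1j * f / 2000.0)
mag_db, gd = gain_and_group_delay(f, meg)
```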


Journal of the Acoustical Society of America | 2013

Hierarchical effects of attention on amplitude modulation encoding in auditory cortex

Mitchell L. Sutter; Kevin N. O'Connor; Joshua D. Downer; Jeffrey S. Johnson; Mamiko Niwa

How attention influences single-neuron responses in the auditory system remains unresolved. We found that when monkeys actively discriminated temporally amplitude-modulated (AM) sounds from unmodulated sounds, primary auditory (A1) and middle lateral belt (ML) cortical neurons discriminated those sounds better than when the monkeys were passively listening. This was true for both rate and temporal codes. Differences in AM responses, and the effects of attentional modulation on those responses, suggest that: (1) attention improves neurons' ability to temporally follow modulation; (2) non-synchronized responses play an important role in AM discrimination; (3) ML attention-related increases in activity are stronger and longer-lasting for more difficult stimuli, consistent with stimulus-specific attention, whereas the results in A1 are more consistent with a multiplicative nonlinearity; and (4) A1 and ML code AM differently; ML uses both increases and decreases in firing rate to encode modulation, while A1 primarily uses activity i...


Brain and behavior | 2017

Atypical antipsychotic therapy in Parkinson's disease psychosis: A retrospective study

Mei Yuan; Laura Sperry; Norika Malhado-Chang; Alexandra Duffy; Vicki Wheelock; Sarah Tomaszewski Farias; Kevin N. O'Connor; John Olichney; Kiarash Shahlaie; Lin Zhang

Parkinson's disease psychosis (PDP) is a frequent complication of idiopathic Parkinson's disease (iPD) with a significant impact on quality of life and an association with poorer outcomes. Atypical antipsychotic drugs (APDs) are often used to treat PDP; however, their use is often complicated by adverse drug reactions (ADRs). In this study, we present patients with PDP who were treated with the most commonly used atypical antipsychotic agents and review their respective ADRs.


Journal of the Acoustical Society of America | 2015

Segregating two simultaneous sounds in elevation using temporal envelope: Human psychophysics and a physiological model

Jeffrey S. Johnson; Kevin N. O'Connor; Mitchell L. Sutter

The ability to segregate simultaneous sound sources based on their spatial locations is an important aspect of auditory scene analysis. While the role of sound azimuth in segregation is well studied, the contribution of sound elevation remains unknown. Although previous studies in humans suggest that elevation cues alone are not sufficient to segregate simultaneous broadband sources, the current study demonstrates they can suffice. Listeners segregating a temporally modulated noise target from a simultaneous unmodulated noise distracter differing in elevation fall into two statistically distinct groups: one that identifies target direction accurately across a wide range of modulation frequencies (MF) and one that cannot identify target direction accurately and, on average, reports the opposite direction of the target for low MF. A non-spiking model of inferior colliculus neurons that process single-source elevation cues suggests that the performance of both listener groups at the population level can be accounted for by the balance of excitatory and inhibitory inputs in the model. These results establish the potential for broadband elevation cues to contribute to the computations underlying sound source segregation and suggest a potential mechanism underlying this contribution.
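The excitatory-inhibitory balance idea can be illustrated with a rate-model unit whose output is a rectified weighted difference of excitatory and inhibitory drive. This is a generic sketch, not the paper's actual inferior-colliculus model, and the weights are illustrative:

```python
import numpy as np

def ei_unit(excitation, inhibition, w_e=1.0, w_i=0.8, threshold=0.0):
    """Non-spiking rate unit: rectified weighted difference of excitatory
    and inhibitory input. Shifting the w_i / w_e balance changes whether
    the unit responds to a given combination of elevation-cue inputs."""
    return np.maximum(0.0, w_e * excitation - w_i * inhibition - threshold)

# Two hypothetical listeners differing only in E-I balance.
e, i = 1.0, 0.9
print(ei_unit(e, i, w_i=0.6))  # excitation-dominated: unit responds
print(ei_unit(e, i, w_i=1.2))  # inhibition-dominated: unit is silent
```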

Collaboration


Kevin N. O'Connor's top co-authors.


Mamiko Niwa

University of California


M. L. Sutter

University of California
