Publications


Featured research published by Ian H. Stevenson.


Nature Neuroscience | 2011

How advances in neural recording affect data analysis

Ian H. Stevenson; Konrad P. Körding

Over the last five decades, progress in neural recording techniques has allowed the number of simultaneously recorded neurons to double approximately every 7 years, mirroring Moore's law. Such exponential growth motivates us to ask how data analysis techniques are affected by progressively larger numbers of recorded neurons. Traditionally, neurons are analyzed independently on the basis of their tuning to stimuli or movement. Although tuning curve approaches are unaffected by growing numbers of simultaneously recorded neurons, newly developed techniques that analyze interactions between neurons become more accurate and more complex as the number of recorded neurons increases. Emerging data analysis techniques should consider both the computational costs and the potential for more accurate models associated with this exponential growth of the number of recorded neurons.
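As a rough worked example of the scaling described above, the paper's roughly seven-year doubling time implies exponential growth of the form N(t) = N0 * 2^((t - t0)/7). The baseline year and neuron count below are illustrative assumptions, not figures from the article:

    # Sketch: projected number of simultaneously recorded neurons,
    # assuming the ~7-year doubling time reported in the paper.
    # Baseline year and count are illustrative, not from the article.

    def projected_neuron_count(year, base_year=2010, base_count=100, doubling_years=7.0):
        """Expected simultaneously recorded neuron count in a given year."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (2010, 2017, 2024, 2031):
        print(year, round(projected_neuron_count(year)))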


Journal of Applied Physics | 2006

Principles and mechanisms of gas sensing by GaN nanowires functionalized with gold nanoparticles

Vladimir Dobrokhotov; David N. McIlroy; M. Grant Norton; A. Abuzir; Wei Jiang Yeh; Ian H. Stevenson; R. Pouy; J. Bochenek; M. Cartwright; Lidong Wang; J. Dawson; Miles F. Beaux; Chris Berven

We present the electrical properties of a chemical sensor, constructed from mats of GaN nanowires decorated with gold nanoparticles, as a function of exposure to Ar, N2, and methane. The Au-nanoparticle-decorated nanowires exhibit chemically selective electrical responses: the sensor shows a nominal response to Ar and a slightly greater response to N2, while exposure to methane suppresses the conductivity by 50% relative to vacuum. The effect is fully reversible and independent of exposure history. We offer a model in which the change in current is caused by a change in the depletion depth of the nanowires, driven by an adsorbate-induced change in the potential on the gold nanoparticles at the nanowire surface.


Current Opinion in Neurobiology | 2008

Inferring functional connections between neurons

Ian H. Stevenson; James M. Rebesco; Lee E. Miller; Konrad P. Körding

A central question in neuroscience is how interactions between neurons give rise to behavior. In many electrophysiological experiments, the activity of a set of neurons is recorded while sensory stimuli or movement tasks are varied. Tools that aim to reveal underlying interactions between neurons from such data can be extremely useful. Traditionally, neuroscientists have studied these interactions using purely descriptive statistics (cross-correlograms or joint peri-stimulus time histograms). However, the interpretation of such data is often difficult, particularly as the number of recorded neurons grows. Recent research suggests that model-based, maximum likelihood methods can improve these analyses. In addition to estimating neural interactions, application of these techniques has improved decoding of external variables, created novel interpretations of existing electrophysiological data, and may provide new insight into how the brain represents information.
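The descriptive statistics mentioned above can be made concrete with a minimal cross-correlogram computation. The spike trains, window, and bin size below are hypothetical placeholders, not data or parameters from the paper:

    import numpy as np

    def cross_correlogram(spikes_a, spikes_b, max_lag=0.05, bin_size=0.001):
        """Histogram of spike-time differences (b - a) within +/- max_lag seconds."""
        diffs = []
        for t in spikes_a:
            nearby = spikes_b[(spikes_b > t - max_lag) & (spikes_b < t + max_lag)]
            diffs.extend(nearby - t)
        bins = np.arange(-max_lag, max_lag + bin_size, bin_size)
        counts, edges = np.histogram(diffs, bins=bins)
        return counts, edges

    # Hypothetical spike trains for two neurons (10 s at ~20 Hz each).
    rng = np.random.default_rng(0)
    spikes_a = np.sort(rng.uniform(0, 10, 200))
    spikes_b = np.sort(rng.uniform(0, 10, 200))
    counts, edges = cross_correlogram(spikes_a, spikes_b)
    print(counts.sum(), "coincidences within +/- 50 ms")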


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2009

Bayesian Inference of Functional Connectivity and Network Structure From Spikes

Ian H. Stevenson; James M. Rebesco; Nicholas G. Hatsopoulos; Zach Haga; Lee E. Miller; Konrad P. Körding

Current multielectrode techniques enable the simultaneous recording of spikes from hundreds of neurons. To study neural plasticity and network structure, it is desirable to infer the underlying functional connectivity between the recorded neurons. Functional connectivity is defined by a large number of parameters, which characterize how each neuron influences the other neurons. A Bayesian approach that combines information from the recorded spikes (likelihood) with prior beliefs about functional connectivity (prior) can improve inference of these parameters and reduce overfitting. Recent studies have used likelihood functions based on point-process statistics and a prior that captures the sparseness of neural connections. Here we include a prior that captures the empirical finding that interactions tend to vary smoothly in time. We show that this method can successfully infer connectivity patterns in simulated data and apply the algorithm to spike data recorded from primary motor (M1) and premotor (PMd) cortices of a monkey. Finally, we present a new approach to studying structure in inferred connections based on a Bayesian clustering algorithm. Groups of neurons in M1 and PMd show common patterns of input and output that may correspond to functional assemblies.
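A minimal sketch of the kind of penalized (MAP) estimation described above: a Poisson (point-process) log-likelihood for one target neuron combined with a sparseness penalty and a squared-difference penalty that encourages coupling weights to vary smoothly across time lags. All array shapes, data, and penalty strengths are illustrative assumptions, not the paper's actual model or priors:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    T, N, K = 500, 5, 3                     # time bins, presynaptic neurons, time lags
    X = rng.poisson(0.2, (T, N * K))        # lagged spike-count covariates (hypothetical)
    y = rng.poisson(0.5, T)                 # spikes of the target neuron (hypothetical)

    def neg_log_posterior(w, lam_sparse=1.0, lam_smooth=1.0, eps=1e-6):
        rate = np.exp(X @ w)                            # conditional intensity per bin
        nll = np.sum(rate - y * (X @ w))                # Poisson negative log-likelihood
        sparse = lam_sparse * np.sum(np.sqrt(w**2 + eps))       # smoothed L1 (sparseness)
        W = w.reshape(N, K)
        smooth = lam_smooth * np.sum(np.diff(W, axis=1) ** 2)   # penalize jumps across lags
        return nll + sparse + smooth

    w_hat = minimize(neg_log_posterior, np.zeros(N * K)).x
    print(w_hat.reshape(N, K).round(2))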


Science | 2014

Spatially Distributed Local Fields in the Hippocampus Encode Rat Position

Gautam Agarwal; Ian H. Stevenson; Antal Berényi; Kenji Mizuseki; György Buzsáki; Friedrich T. Sommer

Extracting Spatial Information (editor's summary): The location of a rat can be deciphered from hippocampal activity by detecting the firing of individual place-selective neurons. In contrast, the local field potential (LFP), which arises from the coherent voltage fluctuations of large hippocampal cell populations, has been hard to decode. Agarwal et al. (p. 626) worked out how to recover positional information exclusively from multiple-site LFP measurements in the rat hippocampus. The information was as precise as that derived from spiking place cells. The approach might also be applicable more generally for deciphering information from coherent population activity anywhere in the brain. Electrical fields within the hippocampus can now be decoded to reveal a rat's location.

Although neuronal spikes can be readily detected from extracellular recordings, synaptic and subthreshold activity remains undifferentiated within the local field potential (LFP). In the hippocampus, neurons discharge selectively when the rat is at certain locations, while LFPs at single anatomical sites exhibit no such place-tuning. Nonetheless, because the representation of position is sparse and distributed, we hypothesized that spatial information can be recovered from multiple-site LFP recordings. Using high-density sampling of LFP and computational methods, we show that the spatiotemporal structure of the theta rhythm can encode position as robustly as neuronal spiking populations. Because our approach exploits the rhythmicity and sparse structure of neural activity, features found in many brain regions, it is useful as a general tool for discovering distributed LFP codes.
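A highly simplified sketch of the decoding idea: extract theta-band amplitude and phase features from multi-site LFP and regress position onto them. The synthetic signals, filter settings, and plain least-squares decoder below are placeholder assumptions; the paper's actual recordings and pipeline differ:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs, T, n_sites = 1000, 20_000, 8              # Hz, samples, channels (hypothetical)
    rng = np.random.default_rng(2)
    lfp = rng.standard_normal((T, n_sites))       # placeholder multi-site LFP
    position = np.cumsum(rng.standard_normal(T)) * 0.01   # placeholder 1-D position

    b, a = butter(2, [6 / (fs / 2), 10 / (fs / 2)], btype="band")
    theta = filtfilt(b, a, lfp, axis=0)           # theta-band (6-10 Hz) component
    analytic = hilbert(theta, axis=0)
    features = np.hstack([np.abs(analytic),       # amplitude envelope per site
                          np.cos(np.angle(analytic)),
                          np.sin(np.angle(analytic))])

    w, *_ = np.linalg.lstsq(features, position, rcond=None)
    print("decoded-position correlation:",
          round(np.corrcoef(features @ w, position)[0, 1], 2))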


Frontiers in Systems Neuroscience | 2010

Rewiring Neural Interactions by Micro-Stimulation

James M. Rebesco; Ian H. Stevenson; Konrad P. Körding; Sara A. Solla; Lee E. Miller

Plasticity is a crucial component of normal brain function and a critical mechanism for recovery from injury. In vitro, associative pairing of presynaptic spiking and stimulus-induced postsynaptic depolarization causes changes in the synaptic efficacy of the presynaptic neuron, when activated by extrinsic stimulation. In vivo, such paradigms can alter the responses of whole groups of neurons to stimulation. Here, we used in vivo spike-triggered stimulation to drive plastic changes in rat forelimb sensorimotor cortex, which we monitored using a statistical measure of functional connectivity inferred from the spiking statistics of the neurons during normal, spontaneous behavior. These induced plastic changes in inferred functional connectivity depended on the latency between trigger spike and stimulation, and appear to reflect a robust reorganization of the network. Such targeted connectivity changes might provide a tool for rerouting the flow of information through a network, with implications for both rehabilitation and brain–machine interface applications.


Journal of Neurophysiology | 2011

Statistical assessment of the stability of neural movement representations

Ian H. Stevenson; Anil Cherian; Brian M. London; Nicholas A. Sachs; Eric W. Lindberg; Jacob Reimer; Marc W. Slutzky; Nicholas G. Hatsopoulos; Lee E. Miller; Konrad P. Körding

In systems neuroscience, neural activity that represents movements or sensory stimuli is often characterized by spatial tuning curves that may change in response to training, attention, altered mechanics, or the passage of time. A vital step in determining whether tuning curves change is accounting for estimation uncertainty due to measurement noise. In this study, we address the issue of tuning curve stability using methods that take uncertainty directly into account. We analyze data recorded from neurons in primary motor cortex using chronically implanted, multielectrode arrays in four monkeys performing center-out reaching. With the use of simulations, we demonstrate that under typical experimental conditions, the effect of neuronal noise on estimated preferred direction can be quite large and is affected by both the amount of data and the modulation depth of the neurons. In experimental data, we find that after taking uncertainty into account using bootstrapping techniques, the majority of neurons appear to be very stable on a timescale of minutes to hours. Lastly, we introduce adaptive filtering methods to explicitly model dynamic tuning curves. In contrast to several previous findings suggesting that tuning curves may be in constant flux, we conclude that the neural representation of limb movement is, on average, quite stable and that impressions to the contrary may be largely the result of measurement noise.
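A minimal sketch of the bootstrap idea applied to preferred-direction (PD) estimates under a cosine tuning model. The simulated firing rates, trial counts, and the simple circular-moment PD estimator are assumptions for illustration, not the study's data or analysis code:

    import numpy as np

    rng = np.random.default_rng(3)
    n_trials = 200
    directions = rng.uniform(0, 2 * np.pi, n_trials)     # reach directions per trial
    true_pd = np.pi / 3
    rates = rng.poisson(10 + 8 * np.cos(directions - true_pd))   # spike counts per trial

    def estimate_pd(theta, counts):
        """PD from the first circular moment of the spike counts."""
        return np.angle(np.sum(counts * np.exp(1j * theta)))

    pd_boot = []
    for _ in range(1000):
        idx = rng.integers(0, n_trials, n_trials)         # resample trials with replacement
        pd_boot.append(estimate_pd(directions[idx], rates[idx]))

    # Plain SD is acceptable here because estimates cluster well away from +/- pi.
    print("PD estimate (rad):", round(estimate_pd(directions, rates), 3),
          "| bootstrap SD (rad):", round(np.std(pd_boot), 3))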


PLOS Computational Biology | 2012

Functional Connectivity and Tuning Curves in Populations of Simultaneously Recorded Neurons

Ian H. Stevenson; Brian M. London; Emily R. Oby; Nicholas A. Sachs; Jacob Reimer; Bernhard Englitz; Stephen V. David; Shihab A. Shamma; Timothy J. Blanche; Kenji Mizuseki; Amin Zandvakili; Nicholas G. Hatsopoulos; Lee E. Miller; Konrad P. Körding

How interactions between neurons relate to tuned neural responses is a longstanding question in systems neuroscience. Here we use statistical modeling and simultaneous multi-electrode recordings to explore the relationship between these interactions and tuning curves in six different brain areas. We find that, in most cases, functional interactions between neurons provide an explanation of spiking that complements and, in some cases, surpasses the influence of canonical tuning curves. Modeling functional interactions improves both encoding and decoding accuracy by accounting for noise correlations and features of the external world that tuning curves fail to capture. In cortex, modeling coupling alone allows spikes to be predicted more accurately than tuning curve models based on external variables. These results suggest that statistical models of functional interactions between even relatively small numbers of neurons may provide a useful framework for examining neural coding.
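One way to make the comparison above concrete is to fit two Poisson regression models for a single neuron, one with only external (tuning) covariates and one that adds the spiking of other neurons, and compare how well each explains the data. Everything below (synthetic data, covariates, regularization) is an illustrative assumption, and it uses scikit-learn's PoissonRegressor rather than the models in the paper:

    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(4)
    T = 2000
    direction = rng.uniform(0, 2 * np.pi, T)                  # external variable
    tuning_X = np.column_stack([np.cos(direction), np.sin(direction)])
    other_spikes = rng.poisson(0.3, (T, 10))                  # 10 other neurons
    coupled_X = np.hstack([tuning_X, other_spikes])

    # Target neuron driven by tuning and one coupled neuron (by construction).
    rate = np.exp(0.5 * np.cos(direction) + 0.8 * other_spikes[:, 0] - 1.0)
    y = rng.poisson(rate)

    tuning_only = PoissonRegressor(alpha=1e-3, max_iter=500).fit(tuning_X, y)
    with_coupling = PoissonRegressor(alpha=1e-3, max_iter=500).fit(coupled_X, y)

    # score() is the fraction of Poisson deviance explained (D^2); higher is better.
    print("tuning only   D^2:", round(tuning_only.score(tuning_X, y), 3))
    print("with coupling D^2:", round(with_coupling.score(coupled_X, y), 3))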


PLOS Computational Biology | 2009

Bayesian integration and non-linear feedback control in a full-body motor task

Ian H. Stevenson; Hugo L. Fernandes; Iris Vilares; Kunlin Wei; Konrad P. Körding

A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well-described by optimal control models where a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a Linear-Quadratic-Regulator or Bang-bang controller). We find evidence that subjects integrate information over time taking into account uncertainty. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.
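A compact sketch of the model class mentioned above: a Kalman filter estimating the noisy cursor state, combined with a linear-quadratic regulator acting on that estimate. The dynamics, noise levels, and cost weights are hypothetical, not fit to the experiment:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    dt = 0.05
    A = np.array([[1, dt], [0, 1]])        # position/velocity dynamics
    B = np.array([[0], [dt]])              # control accelerates the cursor
    C = np.array([[1, 0]])                 # only noisy position is observed
    Q_proc, R_obs = 1e-4 * np.eye(2), np.array([[0.05]])
    Q_cost, R_cost = np.diag([1.0, 0.1]), np.array([[0.01]])

    # LQR gain from the discrete algebraic Riccati equation.
    P = solve_discrete_are(A, B, Q_cost, R_cost)
    K = np.linalg.solve(R_cost + B.T @ P @ B, B.T @ P @ A)

    rng = np.random.default_rng(5)
    x = np.array([1.0, 0.0])               # true state: start 1 unit from the target
    x_hat, Sigma = np.zeros(2), np.eye(2)  # filter mean and covariance
    for _ in range(100):
        u = -K @ x_hat                                          # act on the estimate
        x = A @ x + (B @ u).ravel() + rng.multivariate_normal(np.zeros(2), Q_proc)
        y = C @ x + rng.normal(0, np.sqrt(R_obs[0, 0]))         # noisy observation
        # Kalman predict / update
        x_hat = A @ x_hat + (B @ u).ravel()
        Sigma = A @ Sigma @ A.T + Q_proc
        S = C @ Sigma @ C.T + R_obs
        G = Sigma @ C.T @ np.linalg.inv(S)
        x_hat = x_hat + (G @ (y - C @ x_hat)).ravel()
        Sigma = (np.eye(2) - G @ C) @ Sigma

    print("final position estimate:", round(x_hat[0], 3))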


Journal of Neurophysiology | 2010

Hierarchical Bayesian modeling and Markov chain Monte Carlo sampling for tuning-curve analysis.

Beau Cronin; Ian H. Stevenson; Mriganka Sur; Konrad P. Körding

A central theme of systems neuroscience is to characterize the tuning of neural responses to sensory stimuli or the production of movement. Statistically, we often want to estimate the parameters of the tuning curve, such as preferred direction, as well as the associated degree of uncertainty, characterized by error bars. Here we present a new sampling-based, Bayesian method that allows the estimation of tuning-curve parameters, the estimation of error bars, and hypothesis testing. This method also provides a useful way of visualizing which tuning curves are compatible with the recorded data. We demonstrate the utility of this approach using recordings of orientation and direction tuning in primary visual cortex, direction of motion tuning in primary motor cortex, and simulated data.
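A minimal sketch of sampling-based tuning-curve estimation: a random-walk Metropolis sampler over the parameters of a cosine tuning curve with a Poisson likelihood and a flat prior. This is a generic illustration on simulated data, not the hierarchical model or sampler used in the paper:

    import numpy as np

    rng = np.random.default_rng(6)
    theta = rng.uniform(0, 2 * np.pi, 150)                       # stimulus directions
    counts = rng.poisson(np.exp(1.0 + 0.8 * np.cos(theta - 1.2)))   # spike counts

    def log_post(p):
        base, mod, pd = p
        log_rate = base + mod * np.cos(theta - pd)
        return np.sum(counts * log_rate - np.exp(log_rate))      # Poisson log-likelihood

    samples, p = [], np.array([0.0, 0.5, 0.0])
    lp = log_post(p)
    for _ in range(5000):
        prop = p + rng.normal(0, 0.05, 3)                        # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                 # Metropolis accept/reject
            p, lp = prop, lp_prop
        samples.append(p.copy())

    pd_samples = np.array(samples)[1000:, 2]                     # discard burn-in
    print("preferred direction: mean", round(pd_samples.mean(), 2),
          "| 95% interval", np.percentile(pd_samples, [2.5, 97.5]).round(2))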

Collaboration


Dive into Ian H. Stevenson's collaborations.

Top Co-Authors

Hugo L. Fernandes, Rehabilitation Institute of Chicago
Maxim Volgushev, University of Connecticut