Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brian Nils Lundstrom is active.

Publications


Featured research published by Brian Nils Lundstrom.


Nature Neuroscience | 2008

Fractional differentiation by neocortical pyramidal neurons

Brian Nils Lundstrom; Matthew H. Higgs; William J. Spain; Adrienne L. Fairhall

Neural systems adapt to changes in stimulus statistics. However, it is not known how stimuli with complex temporal dynamics drive the dynamics of adaptation and the resulting firing rate. For single neurons, it has often been assumed that adaptation has a single time scale. We found that single rat neocortical pyramidal neurons adapt with a time scale that depends on the time scale of changes in stimulus statistics. This multiple time scale adaptation is consistent with fractional order differentiation, such that the neuron's firing rate is a fractional derivative of slowly varying stimulus parameters. Biophysically, even though neuronal fractional differentiation effectively yields adaptation with many time scales, we found that its implementation required only a few properly balanced known adaptive mechanisms. Fractional differentiation provides single neurons with a fundamental and general computation that can contribute to efficient information processing, stimulus anticipation and frequency-independent phase shifts of oscillatory neuronal firing.
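
As a rough illustration of the computation described in this abstract (not the authors' code), the sketch below applies a Grünwald-Letnikov fractional derivative of order alpha to a step in a slowly varying stimulus parameter; the order 0.15, time step, and test signal are illustrative assumptions.

```python
# Minimal sketch: Grünwald-Letnikov fractional derivative of a slowly varying
# signal, the kind of transformation the paper attributes to the firing rate.
import numpy as np

def fractional_derivative(signal, alpha, dt):
    """Approximate the order-alpha Grünwald-Letnikov derivative of a sampled signal."""
    n = len(signal)
    # Weights w_k = (-1)^k * binom(alpha, k), built recursively.
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.zeros(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], signal[i::-1]) / dt ** alpha
    return out

# Example: a step in a stimulus parameter; its fractional derivative decays as a
# power law, i.e. adaptation spanning many effective time scales.
dt = 1e-3                       # s, illustrative
t = np.arange(0.0, 2.0, dt)
step = (t > 0.5).astype(float)
rate_like = fractional_derivative(step, alpha=0.15, dt=dt)
```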


NeuroImage | 2005

The role of precuneus and left inferior frontal cortex during source memory episodic retrieval

Brian Nils Lundstrom; Martin Ingvar; Karl Magnus Petersson

The posterior medial parietal cortex and left prefrontal cortex (PFC) have both been implicated in the recollection of past episodes. In a previous study, we found the posterior precuneus and left lateral inferior frontal cortex to be activated during episodic source memory retrieval. This study further examines the role of posterior precuneal and left prefrontal activation during episodic source memory retrieval using a similar source memory paradigm but with longer latency between encoding and retrieval. Our results suggest that both the precuneus and the left inferior PFC are important for regeneration of rich episodic contextual associations and that the precuneus activates in tandem with the left inferior PFC during correct source retrieval. Further, results suggest that the left ventro-lateral frontal region/frontal operculum is involved in searching for task-relevant information (BA 47) and subsequent monitoring or scrutiny (BA 44/45) while regions in the dorsal inferior frontal cortex are important for information selection (BA 45/46).


NeuroImage | 2003

Isolating the retrieval of imagined pictures during episodic memory: activation of the left precuneus and left prefrontal cortex

Brian Nils Lundstrom; Karl Magnus Petersson; Jesper Andersson; Mikael Johansson; Peter Fransson; Martin Ingvar

The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. In order to clarify their functional significance, we performed this functional magnetic resonance imaging study, which employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.


The Journal of Neuroscience | 2007

The Impact of Input Fluctuations on the Frequency–Current Relationships of Layer 5 Pyramidal Neurons in the Rat Medial Prefrontal Cortex

Maura Arsiero; Hans-Rudolf Lüscher; Brian Nils Lundstrom; Michele Giugliano

The role of irregular cortical firing in neuronal computation is still debated, and it is unclear how signals carried by fluctuating synaptic potentials are decoded by downstream neurons. We examined in vitro frequency versus current (f–I) relationships of layer 5 (L5) pyramidal cells of the rat medial prefrontal cortex (mPFC) using fluctuating stimuli. Studies in the somatosensory cortex show that L5 neurons become insensitive to input fluctuations as input mean increases and that their f–I response becomes linear. In contrast, our results show that mPFC L5 pyramidal neurons retain an increased sensitivity to input fluctuations, whereas their sensitivity to the input mean diminishes to near zero. This implies that the discharge properties of L5 mPFC neurons are well suited to encode input fluctuations rather than input mean in their firing rates, with important consequences for information processing and stability of persistent activity at the network level.
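
A minimal sketch of the kind of measurement described above, using a leaky integrate-and-fire surrogate instead of recorded L5 cells: it estimates an f-I curve for several input standard deviations so that sensitivity to the input mean and to fluctuations can be compared. All parameters are illustrative assumptions.

```python
# Sketch: f-I curves of a leaky integrate-and-fire neuron driven by Gaussian
# current noise, for several fluctuation strengths sigma.
import numpy as np

def lif_rate(mu, sigma, dt=1e-4, T=5.0, tau=0.02, v_th=1.0, v_reset=0.0, seed=0):
    """Mean firing rate for input with mean mu and fluctuation strength sigma."""
    rng = np.random.default_rng(seed)
    v, n_spikes = 0.0, 0
    for xi in rng.standard_normal(int(T / dt)):
        # Euler-Maruyama step of the usual diffusion approximation.
        v += (dt / tau) * (mu - v) + sigma * np.sqrt(dt / tau) * xi
        if v >= v_th:
            v, n_spikes = v_reset, n_spikes + 1
    return n_spikes / T

means = np.linspace(0.6, 2.0, 8)
f_I = {sigma: [lif_rate(mu, sigma) for mu in means] for sigma in (0.05, 0.2, 0.4)}
```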


The Journal of Neuroscience | 2010

Multiple Timescale Encoding of Slowly Varying Whisker Stimulus Envelope in Cortical and Thalamic Neurons In Vivo

Brian Nils Lundstrom; Adrienne L. Fairhall; Miguel Maravall

Adaptive processes over many timescales endow neurons with sensitivity to stimulus changes over a similarly wide range of scales. Although spike timing of single neurons can precisely signal rapid fluctuations in their inputs, the mean firing rate can convey information about slower-varying properties of the stimulus. Here, we investigate the firing rate response to a slowly varying envelope of whisker motion in two processing stages of the rat vibrissa pathway. The whiskers of anesthetized rats were moved through a noise trajectory with an amplitude that was sinusoidally modulated at one of several frequencies. In thalamic neurons, we found that the rate response to the stimulus envelope was also sinusoidal, with an approximately frequency-independent phase advance with respect to the input. Responses in cortex were similar but with a phase shift that was about three times larger, consistent with a larger amount of rate adaptation. These response properties can be described as a linear transformation of the input for which a single parameter quantifies the phase shift as well as the degree of adaptation. These results are reproduced by a model of adapting neurons connected by synapses with short-term plasticity, showing that the observed linear response and phase lead can be built up from a network that includes a sequence of nonlinear adapting elements. Our study elucidates how slowly varying envelope information under passive stimulation is preserved and transformed through the vibrissa processing pathway.
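
To make the phase-lead measurement concrete, here is an assumed analysis sketch (not the paper's code) that extracts the gain and phase of a firing-rate response at the envelope modulation frequency from its Fourier component. `stim_envelope` and `rate` are assumed to be equal-length arrays sampled at interval `dt`.

```python
# Sketch: gain and phase lead of a rate response relative to a sinusoidal
# stimulus envelope at modulation frequency f (Hz), from single Fourier components.
import numpy as np

def gain_and_phase(stim_envelope, rate, f, dt):
    t = np.arange(len(rate)) * dt
    basis = np.exp(-2j * np.pi * f * t)
    stim_c = np.dot(stim_envelope - stim_envelope.mean(), basis)
    rate_c = np.dot(rate - rate.mean(), basis)
    gain = np.abs(rate_c) / np.abs(stim_c)
    phase_lead = np.angle(rate_c / stim_c)   # radians; > 0 means the rate leads
    return gain, np.degrees(phase_lead)
```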


Journal of Computational Neuroscience | 2009

Sensitivity of firing rate to input fluctuations depends on time scale separation between fast and slow variables in single neurons

Brian Nils Lundstrom; Michael Famulare; Larry B. Sorensen; William J. Spain; Adrienne L. Fairhall

Neuronal responses are often characterized by the firing rate as a function of the stimulus mean, or the f–I curve. We introduce a novel classification of neurons into Types A, B−, and B+ according to how f–I curves are modulated by input fluctuations. In Type A neurons, the f–I curves display little sensitivity to input fluctuations when the mean current is large. In contrast, Type B neurons display sensitivity to fluctuations throughout the entire range of input means. Type B− neurons do not fire repetitively for any constant input, whereas Type B+ neurons do. We show that Type B+ behavior results from a separation of time scales between a slow and fast variable. A voltage-dependent time constant for the recovery variable can facilitate sensitivity to input fluctuations. Type B+ firing rates can be approximated using a simple “energy barrier” model.
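
The "energy barrier" idea can be illustrated with an Arrhenius/Kramers-style escape rate; the form below is an illustrative assumption, not the paper's exact expression, but it shows why such a firing rate stays sensitive to the fluctuation strength sigma across a wide range of mean inputs.

```python
# Illustrative "energy barrier" rate: below threshold the escape rate grows
# roughly like exp(-barrier / variance), so sigma matters throughout the range.
import numpy as np

def barrier_rate(mu, sigma, mu_th=1.0, r_max=100.0):
    """Toy escape-rate f-I curve; the barrier shrinks as the mean nears threshold."""
    barrier = np.maximum(mu_th - mu, 0.0) ** 2
    return r_max * np.exp(-barrier / (2.0 * sigma ** 2))

mu = np.linspace(0.0, 1.5, 50)
f_I_low_noise = barrier_rate(mu, sigma=0.1)
f_I_high_noise = barrier_rate(mu, sigma=0.3)   # higher rates well below threshold
```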


The Journal of Neuroscience | 2006

Decoding Stimulus Variance from a Distributional Neural Code of Interspike Intervals

Brian Nils Lundstrom; Adrienne L. Fairhall

The spiking output of an individual neuron can represent information about the stimulus via mean rate, absolute spike time, and the time intervals between spikes. Here we discuss a distinct form of information representation, the local distribution of spike intervals, and show that the time-varying distribution of interspike intervals (ISIs) can represent parameters of the statistical context of stimuli. For many sensory neural systems the mapping between the stimulus input and spiking output is not fixed but, rather, depends on the statistical properties of the stimulus, potentially leading to ambiguity. We have shown previously that for the adaptive neural code of the fly H1, a motion-sensitive neuron in the fly visual system, information about the overall variance of the signal is obtainable from the ISI distribution. We now demonstrate the decoding of information about variance and show that a distributional code of ISIs can resolve ambiguities introduced by slow spike frequency adaptation. We examine the precision of this distributional code for the representation of stimulus variance in the H1 neuron as well as in the Hodgkin–Huxley model neuron. We find that the accuracy of the decoding depends on the shapes of the ISI distributions and the speed with which they adapt to new stimulus variances.
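
A minimal decoding sketch in the spirit of the study, assuming reference ISI samples collected under low- and high-variance stimulation are available (from recordings or a model): a short window of ISIs is assigned to whichever reference distribution gives it the higher log-likelihood. Function names, binning, and the variance labels are illustrative assumptions.

```python
# Sketch: classify a window of interspike intervals by comparing its
# log-likelihood under two reference ISI distributions.
import numpy as np

def isi_log_likelihood(window_isis, reference_isis, bins):
    density, _ = np.histogram(reference_isis, bins=bins, density=True)
    idx = np.clip(np.digitize(window_isis, bins) - 1, 0, len(density) - 1)
    return np.sum(np.log(density[idx] + 1e-12))

def decode_variance(window_isis, isis_low, isis_high, bins):
    ll_low = isi_log_likelihood(window_isis, isis_low, bins)
    ll_high = isi_log_likelihood(window_isis, isis_high, bins)
    return "low variance" if ll_low > ll_high else "high variance"

bins = np.linspace(0.0, 0.2, 41)   # ISI bins in seconds, illustrative
```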


Neural Computation | 2008

Two computational regimes of a single-compartment neuron separated by a planar boundary in conductance space

Brian Nils Lundstrom; Sungho Hong; Matthew H. Higgs; Adrienne L. Fairhall

Recent in vitro data show that neurons respond to input variance with varying sensitivities. Here we demonstrate that Hodgkin-Huxley (HH) neurons can operate in two computational regimes: one that is more sensitive to input variance (differentiating) and one that is less sensitive (integrating). A boundary plane in the 3D conductance space separates these two regimes. For a reduced HH model, this plane can be derived analytically from the V nullcline, thus suggesting a means of relating biophysical parameters to neural computation by analyzing the neuron's dynamical system.


PLOS Computational Biology | 2008

Intrinsic Gain Modulation and Adaptive Neural Coding

Sungho Hong; Brian Nils Lundstrom; Adrienne L. Fairhall

In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change in gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
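
For readers unfamiliar with the linear/nonlinear description, the sketch below is generic reverse-correlation code (assumed, not the authors'): the linear filter is estimated as the spike-triggered average on a white-noise stimulus, and the gain curve as the spike probability versus the filtered input.

```python
# Sketch: empirical linear/nonlinear (LN) model from a white-noise stimulus and
# a binary spike train sampled on the same time grid.
import numpy as np

def estimate_ln_model(stimulus, spikes, filter_len, n_bins=20):
    # Linear filter: spike-triggered average (unbiased for white-noise input).
    spike_times = np.nonzero(spikes)[0]
    spike_times = spike_times[spike_times >= filter_len - 1]
    sta = np.mean([stimulus[t - filter_len + 1 : t + 1] for t in spike_times], axis=0)
    # Filtered stimulus: dot product of each stimulus window with the filter.
    proj = np.convolve(stimulus, sta[::-1], mode="valid")
    spk = spikes[filter_len - 1 :]
    # Gain curve: spike probability as a function of the filtered input.
    bins = np.linspace(proj.min(), proj.max(), n_bins + 1)
    which = np.clip(np.digitize(proj, bins) - 1, 0, n_bins - 1)
    gain = np.array([spk[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(n_bins)])
    return sta, bins, gain
```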


Journal of Computational Neuroscience | 2015

Modeling multiple time scale firing rate adaptation in a neural network of local field potentials

Brian Nils Lundstrom

In response to stimulus changes, the firing rates of many neurons adapt, such that stimulus change is emphasized. Previous work has shown that rate adaptation can span a wide range of time scales and produce time scale invariant power law adaptation. However, neuronal rate adaptation is typically modeled using single time scale dynamics, and constructing a conductance-based model with arbitrary adaptation dynamics is nontrivial. Here, a modeling approach is developed in which firing rate adaptation, or spike frequency adaptation, can be understood as a filtering of slow stimulus statistics. Adaptation dynamics are modeled by a stimulus filter, and quantified by measuring the phase leads of the firing rate in response to varying input frequencies. Arbitrary adaptation dynamics are approximated by a set of weighted exponentials with parameters obtained by fitting to a desired filter. With this approach it is straightforward to assess the effect of multiple time scale adaptation dynamics on neural networks. To demonstrate this, single time scale and power law adaptation were added to a network model of local field potentials. Rate adaptation enhanced the slow oscillations of the network and flattened the output power spectrum, dampening intrinsic network frequencies. Thus, rate adaptation may play an important role in network dynamics.
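
A minimal sketch of the filter-approximation step, with the simplification (relative to the paper) of fixing log-spaced time constants and fitting only nonnegative weights to a power-law target kernel; the exponent, time range, and number of exponentials are illustrative assumptions.

```python
# Sketch: approximate a power-law adaptation filter t^(-alpha) by a weighted
# sum of exponentials with fixed, log-spaced time constants.
import numpy as np
from scipy.optimize import nnls

alpha = 0.2                                    # illustrative power-law exponent
t = np.linspace(0.01, 10.0, 1000)              # seconds
target = t ** (-alpha)                         # desired power-law kernel
taus = np.logspace(-2, 1, 8)                   # time constants from 10 ms to 10 s
basis = np.exp(-t[:, None] / taus[None, :])    # one exponential per column
weights, _ = nnls(basis, target)               # nonnegative least-squares fit
approx = basis @ weights                       # tracks the power law over the range
```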

Collaboration


Dive into Brian Nils Lundstrom's collaborations.
