
Publication


Featured research published by Duane Q. Nykamp.


Journal of Computational Neuroscience | 2000

A population density approach that facilitates large-scale modeling of neural networks: analysis and an application to orientation tuning.

Duane Q. Nykamp; Daniel Tranchina

We explore a computationally efficient method of simulating realistic networks of neurons introduced by Knight, Manin, and Sirovich (1996) in which integrate-and-fire neurons are grouped into large populations of similar neurons. For each population, we form a probability density that represents the distribution of neurons over all possible states. The populations are coupled via stochastic synapses in which the conductance of a neuron is modulated according to the firing rates of its presynaptic populations. The evolution equation for each of these probability densities is a partial differential-integral equation, which we solve numerically. Results obtained for several example networks are tested against conventional computations for groups of individual neurons.

We apply this approach to modeling orientation tuning in the visual cortex. Our population density model is based on the recurrent feedback model of a hypercolumn in cat visual cortex of Somers et al. (1995). We simulate the response to oriented flashed bars. As in the Somers model, a weak orientation bias provided by feed-forward lateral geniculate input is transformed by intracortical circuitry into sharper orientation tuning that is independent of stimulus contrast.

The population density approach appears to be a viable method for simulating large neural networks. Its computational efficiency overcomes some of the restrictions imposed by computation time in individual neuron simulations, allowing one to build more complex networks and to explore parameter space more easily. The method produces smooth rate functions with one pass of the stimulus and does not require signal averaging. At the same time, this model captures the dynamics of single-neuron activity that are missed in simple firing-rate models.
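
The abstract describes the population density method only at a high level. As a minimal sketch of the core idea, the following snippet (not the paper's implementation; the leaky integrate-and-fire parameters, fixed-size excitatory kicks, and the Euler/upwind discretization are all illustrative assumptions) evolves a one-dimensional density over membrane voltage and reads the population firing rate off as the probability flux across threshold.

```python
# Minimal population-density sketch for a leaky integrate-and-fire population
# driven by excitatory Poisson input with fixed-size voltage kicks.
# The density rho(v, t) is advected by the leak and shifted by synaptic kicks;
# the population firing rate is the probability flux across threshold.
# All parameter values and the discretization are illustrative assumptions.
import numpy as np

v_rest, v_reset, v_thresh = -70.0, -60.0, -50.0  # mV
tau_m = 20.0      # membrane time constant (ms)
dv_kick = 0.5     # voltage jump per excitatory input spike (mV)
nu_e = 2.5        # total excitatory input rate (spikes/ms)
dt, T = 0.02, 500.0

n = 400
v = np.linspace(v_rest, v_thresh, n)
dv = v[1] - v[0]
rho = np.zeros(n)
rho[0] = 1.0 / dv                           # all probability starts at rest
kick = int(round(dv_kick / dv))             # kick size in grid points
reset_idx = np.argmin(np.abs(v - v_reset))
rates = []

for _ in range(int(T / dt)):
    # Leak: drift -(v - v_rest)/tau_m, upwind advection (mass flows toward rest).
    F = -(v - v_rest) / tau_m * rho         # probability current
    F_up = np.append(F[1:], 0.0)            # current at upper cell faces
    rho = rho - dt * (F_up - F) / dv

    # Synaptic kicks at rate nu_e: density shifts up by `kick` bins;
    # mass within one kick of threshold crosses it and is reset.
    shifted = np.zeros(n)
    shifted[kick:] = rho[:-kick]
    rate = nu_e * rho[-kick:].sum() * dv    # flux across threshold (spikes/ms)
    rho = rho + dt * nu_e * (shifted - rho)
    rho[reset_idx] += dt * rate / dv        # re-insert fired probability at reset
    rates.append(rate)

print("steady-state population rate ~ %.1f Hz" % (1000.0 * np.mean(rates[-2000:])))
```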


Journal of Vision | 2002

Full identification of a linear-nonlinear system via cross-correlation analysis

Duane Q. Nykamp; Dario L. Ringach

A statistical model used extensively in vision research consists of a cascade of a linear operator followed by a static (memoryless) nonlinearity. Common applications include the measurement of simple-cell receptive fields in primary visual cortex and the modeling of human performance in various psychophysical tasks. It is well known that the front-end linear filter of the model can readily be recovered, up to a multiplicative constant, using reverse-correlation techniques. However, a full identification of the model also requires an estimation of the output nonlinearity. Here, we show that for a large class of static nonlinearities, one can obtain analytical expressions for the estimates. The technique works with both Gaussian and binary noise stimuli. The applicability of the method in physiology and psychophysics is demonstrated. Finally, the proposed technique is shown to converge much faster than the currently used linear-reconstruction method.
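
The paper's analytical expressions for the nonlinearity estimates are not reproduced here. The sketch below only illustrates the general recipe the abstract refers to, under assumed conditions (a synthetic half-squaring neuron, Gaussian white-noise frames, and our own variable names): recover the front-end filter up to scale by spike-triggered averaging, then estimate the static nonlinearity by binning spike counts against the filtered stimulus.

```python
# Hedged sketch of linear-nonlinear (LN) identification with Gaussian noise:
# the linear filter is recovered (up to a scale factor) by reverse correlation,
# and the static nonlinearity by binning the response against the filter output.
# The synthetic ground-truth model and all names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, d = 200_000, 20                       # number of stimulus frames, filter length

k_true = np.exp(-np.arange(d) / 4.0) * np.sin(np.arange(d) / 2.0)
k_true /= np.linalg.norm(k_true)

def nonlinearity(g):                     # static "spiking" nonlinearity
    return np.maximum(g, 0.0) ** 2

stim = rng.standard_normal((T, d))       # Gaussian white-noise stimulus frames
rate = nonlinearity(stim @ k_true)
spikes = rng.poisson(rate)               # Poisson spike counts per frame

# Reverse correlation: the spike-triggered average recovers k up to a constant.
k_hat = spikes @ stim / spikes.sum()
k_hat /= np.linalg.norm(k_hat)

# Nonlinearity estimate: mean spike count as a function of the filtered stimulus.
proj = stim @ k_hat
bins = np.quantile(proj, np.linspace(0, 1, 21))
idx = np.digitize(proj, bins[1:-1])
g_centers = np.array([proj[idx == i].mean() for i in range(20)])
f_hat = np.array([spikes[idx == i].mean() for i in range(20)])

print("filter recovered (correlation with truth): %.3f"
      % np.corrcoef(k_true, k_hat)[0, 1])
```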


Frontiers in Computational Neuroscience | 2011

Synchronization from Second Order Network Connectivity Statistics

Liqiong Zhao; Bryce Beverlin; Theoden I. Netoff; Duane Q. Nykamp

We investigate how network structure can influence the tendency for a neuronal network to synchronize, or its synchronizability, independent of the dynamical model for each neuron. The synchrony analysis takes advantage of the framework of second order networks, which defines four second order connectivity statistics based on the relative frequency of two-connection network motifs. The analysis identifies two of these statistics, convergent connections and chain connections, as strongly influencing synchrony. Simulations verify that synchrony decreases with the frequency of convergent connections and increases with the frequency of chain connections. These trends persist with simulations of multiple models for the neuron dynamics and for different types of networks. Surprisingly, divergent connections, which determine the fraction of shared inputs, do not strongly influence synchrony. The critical role of chains, rather than divergent connections, in influencing synchrony can be explained by their increasing the effective coupling strength. The decrease of synchrony with convergent connections is primarily due to the resulting heterogeneity in firing rates.
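
As a hedged illustration of the second order connectivity statistics themselves (the adjacency convention, the normalization against an Erdős-Rényi baseline, and the function names below are our assumptions, not necessarily the paper's definitions), the following sketch counts reciprocal, convergent, divergent, and chain two-edge motifs in a directed network:

```python
# Hedged sketch of the four second-order connectivity statistics from a
# directed adjacency matrix W (convention assumed here: W[i, j] = 1 means a
# connection from neuron j onto neuron i; no self-connections).  Each alpha
# measures the excess frequency of a two-edge motif relative to an
# Erdos-Renyi network with the same connection probability p.
import numpy as np

def second_order_stats(W):
    """Return (alpha_recip, alpha_conv, alpha_div, alpha_chain)."""
    W = np.asarray(W, dtype=float)
    N = W.shape[0]
    p = W.sum() / (N * (N - 1))             # overall connection probability

    indeg = W.sum(axis=1)                   # number of inputs to each neuron
    outdeg = W.sum(axis=0)                  # number of outputs of each neuron

    recip = (W * W.T).sum()                 # ordered reciprocally connected pairs
    conv = (indeg * (indeg - 1)).sum()      # ordered input pairs sharing a target
    div = (outdeg * (outdeg - 1)).sum()     # ordered output pairs sharing a source
    chain = (W @ W).sum() - recip           # two-edge chains k -> j -> i, i != k

    e_pair = N * (N - 1) * p ** 2
    e_triple = N * (N - 1) * (N - 2) * p ** 2
    return (recip / e_pair - 1.0,
            conv / e_triple - 1.0,
            div / e_triple - 1.0,
            chain / e_triple - 1.0)

# Example: an Erdos-Renyi network should give all four alphas near zero.
rng = np.random.default_rng(1)
N, p = 500, 0.05
W = (rng.random((N, N)) < p).astype(float)
np.fill_diagonal(W, 0.0)
print(second_order_stats(W))
```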


Neural Computation | 2001

A Population Density Approach That Facilitates Large-Scale Modeling of Neural Networks: Extension to Slow Inhibitory Synapses

Duane Q. Nykamp; Daniel Tranchina

A previously developed method for efficiently simulating complex networks of integrate-and-fire neurons was specialized to the case in which the neurons have fast unitary postsynaptic conductances. However, inhibitory synaptic conductances are often slower than excitatory ones for cortical neurons, and this difference can have a profound effect on network dynamics that cannot be captured with neurons that have only fast synapses. We thus extend the model to include slow inhibitory synapses. In this model, neurons are grouped into large populations of similar neurons. For each population, we calculate the evolution of a probability density function (PDF), which describes the distribution of neurons over state-space. The population firing rate is given by the flux of probability across the threshold voltage for firing an action potential. In the case of fast synaptic conductances, the PDF was one-dimensional, as the state of a neuron was completely determined by its transmembrane voltage. An exact extension to slow inhibitory synapses increases the dimension of the PDF to two or three, as the state of a neuron now includes the state of its inhibitory synaptic conductance. However, by assuming that the expected value of a neuron's inhibitory conductance is independent of its voltage, we derive a reduction to a one-dimensional PDF and avoid increasing the computational complexity of the problem. We demonstrate that although this assumption is not strictly valid, the results of the reduced model are surprisingly accurate.


Journal of Neurophysiology | 2013

Directed functional connectivity matures with motor learning in a cortical pattern generator

Nancy F. Day; Kyle L. Terleski; Duane Q. Nykamp; Teresa A. Nick

Sequential motor skills may be encoded by feedforward networks that consist of groups of neurons that fire in sequence (Abeles 1991; Long et al. 2010). However, there has been no evidence of an anatomic map of activation sequence in motor control circuits, which would be potentially detectable as directed functional connectivity of coactive neuron groups. The proposed pattern generator for birdsong, the HVC (Long and Fee 2008; Vu et al. 1994), contains axons that are preferentially oriented in the rostrocaudal axis (Nottebohm et al. 1982; Stauffer et al. 2012). We used four-tetrode recordings to assess the activity of ensembles of single neurons along the rostrocaudal HVC axis in anesthetized zebra finches. We found an axial, polarized neural network in which sequential activity is directionally organized along the rostrocaudal axis in adult males, who produce a stereotyped song. Principal neurons fired in rostrocaudal order and with interneurons that were rostral to them, suggesting that groups of excitatory neurons fire at the leading edge of travelling waves of inhibition. Consistent with the synchronization of neurons by caudally travelling waves of inhibition, the activity of interneurons was more coherent in the orthogonal mediolateral axis than in the rostrocaudal axis. If directed functional connectivity within the HVC is important for stereotyped, learned song, then it may be lacking in juveniles, which sing a highly variable song. Indeed, we found little evidence for network directionality in juveniles. These data indicate that a functionally directed network within the HVC matures during sensorimotor learning and may underlie vocal patterning.
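
The abstract does not spell out how directed functional connectivity was quantified. As a hedged illustration of the general idea only (the bin size, window, and asymmetry index are assumptions, not the authors' analysis), the sketch below builds a spike-train cross-correlogram and summarizes its asymmetry around zero lag; an excess of coincidences at positive lags would suggest a directed A-to-B interaction.

```python
# Hedged illustration of a directed functional-connectivity measure:
# a spike-train cross-correlogram and its asymmetry around zero lag.
# The bin size, window, and asymmetry index are our assumptions,
# not the analysis used in the paper.
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=0.05, bin_size=0.001):
    """Histogram of (t_b - t_a) lags for all spike pairs within max_lag (s)."""
    edges = np.arange(-max_lag, max_lag + bin_size, bin_size)
    lags = []
    for t in spikes_a:
        nearby = spikes_b[(spikes_b > t - max_lag) & (spikes_b < t + max_lag)]
        lags.append(nearby - t)
    counts, _ = np.histogram(np.concatenate(lags), bins=edges)
    return counts, edges

def asymmetry_index(counts, edges):
    """(post - pre) / (post + pre): positive values suggest A leads B."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    post = counts[centers > 0].sum()
    pre = counts[centers < 0].sum()
    return (post - pre) / max(post + pre, 1)

# Toy example: neuron B tends to fire ~3 ms after neuron A.
rng = np.random.default_rng(2)
a = np.sort(rng.uniform(0, 100, 2000))                 # A spike times (s)
follow = a[rng.random(a.size) < 0.3] + 0.003           # B spikes driven by A
b = np.sort(np.concatenate([follow, rng.uniform(0, 100, 1500)]))

counts, edges = cross_correlogram(a, b)
print("asymmetry index:", asymmetry_index(counts, edges))
```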


Neurocomputing | 2001

A population density method for large-scale modeling of neuronal networks with realistic synaptic kinetics

E. Haskell; Duane Q. Nykamp; Daniel Tranchina

Population density function (PDF) methods have been used both as a time-saving alternative to direct Monte-Carlo simulation of neuronal network activity and as a tool for the analytic study of neuronal networks. The computational efficiency of the PDF method depends on a low-dimensional state space for the underlying individual neuron. Many previous implementations have assumed that the time scale of the synaptic kinetics is very fast compared with the membrane time constant in order to obtain a one-dimensional state space. Here, we extend our previous PDF methods to synapses with realistic kinetics; synaptic current injection for inhibition is replaced with more realistic conductance modulation.


Journal of Computational Neuroscience | 2012

Dynamical changes in neurons during seizures determine tonic to clonic shift

Bryce Beverlin; J. Kakalios; Duane Q. Nykamp; Theoden I. Netoff

A tonic-clonic seizure transitions from high-frequency asynchronous activity to low-frequency coherent oscillations, yet the mechanism of transition remains unknown. We propose a shift in network synchrony due to changes in cellular response. Here we use phase-response curves (PRCs) from Morris-Lecar (M-L) model neurons with synaptic depression and gradually decrease the input current to cells within a network simulation. This method effectively decreases firing rates, resulting in a shift to greater network synchrony and illustrating a possible mechanism of the transition. PRCs are measured from the M-L conductance-based model cell over a range of input currents within the limit cycle. A large network of 3000 excitatory neurons is simulated with a network topology generated from second-order statistics, which allows a range of population synchrony. The population synchrony of the oscillating cells is measured with the Kuramoto order parameter, which reveals the tonic-to-clonic transition exhibited by our model network. This cellular response shift mechanism for the tonic-clonic seizure transition closely reproduces the population behavior observed in EEG data.
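
The Kuramoto order parameter mentioned above reduces population synchrony to a single number between 0 and 1. A minimal sketch, assuming each neuron's phase is obtained by linear interpolation between its successive spike times (one common convention, not necessarily the one used in the paper):

```python
# Hedged sketch of the Kuramoto order parameter r(t) computed from spike times.
# Each neuron's phase is linearly interpolated between successive spikes
# (one common convention; the paper may define phases differently).
# r ~ 1 means the population fires coherently, r ~ 0 means asynchrony.
import numpy as np

def spike_phases(spike_times, t_grid):
    """Phase in [0, 2*pi) at each grid time, NaN outside first/last spike."""
    phases = np.full(t_grid.shape, np.nan)
    for k in range(len(spike_times) - 1):
        t0, t1 = spike_times[k], spike_times[k + 1]
        mask = (t_grid >= t0) & (t_grid < t1)
        phases[mask] = 2 * np.pi * (t_grid[mask] - t0) / (t1 - t0)
    return phases

def kuramoto_order(spike_trains, t_grid):
    """Modulus of the mean phase vector across neurons at each time point."""
    z = np.zeros(t_grid.shape, dtype=complex)
    n = np.zeros(t_grid.shape)
    for train in spike_trains:
        phi = spike_phases(np.asarray(train), t_grid)
        ok = ~np.isnan(phi)
        z[ok] += np.exp(1j * phi[ok])
        n[ok] += 1
    return np.abs(z) / np.maximum(n, 1)

# Toy example: 50 ~10 Hz oscillators, initially aligned, drifting with ISI jitter.
rng = np.random.default_rng(3)
trains = [np.cumsum(rng.normal(0.1, 0.01, 60)) for _ in range(50)]
t = np.linspace(0.5, 5.0, 1000)
print("mean order parameter:", kuramoto_order(trains, t).mean())
```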


Journal of Computational Neuroscience | 2009

A kinetic theory approach to capturing interneuronal correlation: the feed-forward case.

Chin Yueh Liu; Duane Q. Nykamp

We present an approach for using kinetic theory to capture first- and second-order statistics of neuronal activity. We coarse-grain neuronal networks into populations of neurons and calculate the population-averaged firing rate and output cross-correlation in response to time-varying correlated input. We derive coupling equations for the populations based on first- and second-order statistics of the network connectivity. This coupling scheme is based on the hypothesis that second-order statistics of the network connectivity are sufficient to determine second-order statistics of neuronal activity. We implement a kinetic theory representation of a simple feed-forward network and demonstrate that the kinetic theory model captures key aspects of the emergence and propagation of correlations in the network, as long as the correlations do not become too strong. By analyzing the correlated activity of feed-forward networks with a variety of connectivity patterns, we provide evidence supporting our hypothesis of the sufficiency of second-order connectivity statistics.
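
As a toy illustration of how correlations emerge in a feed-forward layer (a deliberate caricature, not the kinetic-theory model of the paper), the sketch below gives two threshold units partially overlapping Poisson inputs and shows that their output correlation grows with the fraction of shared inputs:

```python
# Hedged toy illustration of correlation emerging in a feed-forward layer:
# two threshold units receive partially overlapping Poisson inputs, and their
# output correlation grows with the fraction of shared inputs.
# This is a caricature of the effect studied in the paper, not its model.
import numpy as np

rng = np.random.default_rng(4)
n_inputs, rate, n_bins, theta = 100, 0.02, 50_000, 3   # rate per input per bin

for shared_frac in (0.0, 0.25, 0.5, 0.75):
    n_shared = int(shared_frac * n_inputs)
    n_own = n_inputs - n_shared
    shared = rng.poisson(rate, (n_bins, n_shared)).sum(axis=1)  # common drive
    own_a = rng.poisson(rate, (n_bins, n_own)).sum(axis=1)      # private to A
    own_b = rng.poisson(rate, (n_bins, n_own)).sum(axis=1)      # private to B
    out_a = (shared + own_a >= theta).astype(float)             # unit A "spikes"
    out_b = (shared + own_b >= theta).astype(float)             # unit B "spikes"
    rho = np.corrcoef(out_a, out_b)[0, 1]
    print(f"shared fraction {shared_frac:.2f}: output correlation {rho:.3f}")
```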


Network: Computation In Neural Systems | 2003

Measuring linear and quadratic contributions to neuronal response

Duane Q. Nykamp

We present a method to dissociate the sign-dependent (linear or odd-order) response from the sign-independent (quadratic or even-order) response of a neuron to sequences of random orthonormal stimulus elements. The method is based on a modification of the classical linear–nonlinear model of neural response. The analysis produces estimates of the stimulus features to which the neuron responds in a sign-dependent manner, the stimulus features to which the neuron responds in a sign-independent manner and the relative weight of the sign-independent response. We propose that this method could be used to characterize simple and complex cells in the primary visual cortex.
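
As a hedged sketch of the dissociation the abstract describes (the simulated neuron, the relative weighting, and the estimators below are illustrative, not the paper's derivation): presenting each orthonormal element with a random sign lets a sign-weighted average isolate the odd (linear) response profile, while an unsigned average isolates the even (quadratic) profile.

```python
# Hedged sketch of separating sign-dependent (odd/linear) from sign-independent
# (even/quadratic) response components with random orthonormal stimulus elements.
# The simulated neuron and all parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(5)
d, trials = 16, 40_000
E = np.eye(d)                           # orthonormal stimulus elements

k = rng.standard_normal(d); k /= np.linalg.norm(k)   # "linear" feature
q = rng.standard_normal(d); q /= np.linalg.norm(q)   # "quadratic" feature

idx = rng.integers(0, d, trials)        # which element is shown on each trial
sign = rng.choice([-1.0, 1.0], trials)  # random sign of the element
stim = sign[:, None] * E[idx]

# Toy response: linear term plus a sign-independent quadratic term plus noise.
resp = stim @ k + 2.0 * (stim @ q) ** 2 + 0.2 * rng.standard_normal(trials)

odd_hat = np.zeros(d)                   # sign-dependent (linear) profile
even_hat = np.zeros(d)                  # sign-independent (quadratic) profile
for i in range(d):
    sel = idx == i
    odd_hat[i] = np.mean(sign[sel] * resp[sel])
    even_hat[i] = np.mean(resp[sel]) - resp.mean()

print("odd profile matches k:   %.3f" % np.corrcoef(odd_hat, k)[0, 1])
print("even profile matches q^2: %.3f" % np.corrcoef(even_hat, q ** 2)[0, 1])
```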


Journal of Computational Neuroscience | 2012

Searching for optimal stimuli: ascending a neuron’s response function

Melinda Evrithiki Koelling; Duane Q. Nykamp

Many methods used to analyze neuronal response assume that neuronal activity has a fundamentally linear relationship to the stimulus. However, some neurons are strongly sensitive to multiple directions in stimulus space and have a highly nonlinear response. It can be difficult to find optimal stimuli for these neurons. We demonstrate how successive linear approximations of neuronal response can effectively carry out gradient ascent and move through stimulus space towards local maxima of the response. We demonstrate search results for a simple model neuron and two models of a highly selective neuron.
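
A minimal sketch of the search strategy, assuming noisy access to the response of a synthetic sign-invariant "neuron" (the toy response function, probe size, and step size are ours): at each step a local linear approximation is fit to responses at small random probes around the current stimulus, and the stimulus moves along the estimated gradient while staying at fixed norm.

```python
# Hedged sketch of searching for an optimal stimulus by successive local
# linear approximations of a noisy, nonlinear response function.
# The toy "neuron" and all probe/step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
d = 30
w1 = rng.standard_normal(d); w1 /= np.linalg.norm(w1)
w2 = rng.standard_normal(d); w2 /= np.linalg.norm(w2)

def response(s):
    """Toy sign-invariant neuron driven by two stimulus directions, plus noise."""
    drive = (s @ w1) ** 2 + 0.5 * (s @ w2) ** 2
    return drive + 0.2 * rng.standard_normal()

s = rng.standard_normal(d)
s /= np.linalg.norm(s)                        # stimulus constrained to unit norm

for step in range(300):
    # Local linear approximation: regress responses on small random probes.
    probes = 0.2 * rng.standard_normal((60, d))
    r = np.array([response(s + p) for p in probes])
    grad, *_ = np.linalg.lstsq(probes, r - r.mean(), rcond=None)

    s = s + 0.1 * grad / (np.linalg.norm(grad) + 1e-12)   # move uphill
    s /= np.linalg.norm(s)                                # re-normalize

print("final mean response:", np.mean([response(s) for _ in range(100)]))
print("alignment with preferred directions:", abs(s @ w1), abs(s @ w2))
```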

Collaboration


Dive into Duane Q. Nykamp's collaborations.

Top Co-Authors

Albert Compte (Autonomous University of Barcelona)

Alex Roxin (Northwestern University)

Liqiong Zhao (University of Minnesota)