Kechen Zhang
Johns Hopkins University School of Medicine
Publications
Featured research published by Kechen Zhang.
Neural Computation | 1999
Kechen Zhang; Terrence J. Sejnowski
Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function or the probability distribution of spikes, and even allowing for some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.
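A minimal numerical sketch of the dimensionality effect described above, under illustrative assumptions the paper does not require (Gaussian tuning curves with a fixed peak rate, independent Poisson spiking, and a dense grid of preferred stimuli): the total Fisher information then scales roughly as the tuning width raised to the power D - 2, so broadening helps when the encoded variable has three or more dimensions and hurts in one dimension.

```python
# Sketch: Fisher information vs. tuning width for Gaussian tuning curves and
# independent Poisson spiking (illustrative assumptions only). Known scaling
# for this special case: total Fisher information ~ sigma**(D - 2).
import numpy as np

def fisher_info(sigma, D, peak_rate=10.0, spacing=0.25, half_width=3.0):
    """Fisher information about the first stimulus coordinate at x = 0,
    summed over a dense grid of preferred stimuli (one neuron per node)."""
    axis = np.arange(-half_width, half_width + spacing, spacing)
    centers = np.stack(np.meshgrid(*([axis] * D), indexing="ij"), axis=-1).reshape(-1, D)
    dist2 = np.sum(centers**2, axis=1)             # stimulus fixed at the origin
    rates = peak_rate * np.exp(-dist2 / (2 * sigma**2))
    drates = rates * centers[:, 0] / sigma**2      # d(rate)/dx_1 at x = 0
    return np.sum(drates**2 / (rates + 1e-12))     # Poisson: J = f'^2 / f

for D in (1, 2, 3):
    J = [fisher_info(s, D) for s in (0.5, 1.0)]
    print(f"D={D}: J(sigma=1)/J(sigma=0.5) = {J[1]/J[0]:.2f}  (theory ~ {2.0**(D-2):.2f})")
```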
Neural Computation | 1998
Alexandre Pouget; Kechen Zhang; Sophie Denève; P.E. Latham
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
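A schematic sketch of the cleanup idea, not the paper's exact network: units tuned to a ring of directions receive a noisy population response, and recurrent dynamics with Gaussian lateral weights, a squaring nonlinearity, and divisive normalization (choices loosely in the spirit of this line of work, with arbitrary parameters) relax it into a smooth bump. The estimate stays in coarse-code format and can still be read out with a population vector.

```python
# Schematic recurrent "cleanup" of a noisy population code (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N = 64
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)
theta_true = 1.3                                    # encoded direction, radians

def popvec(act):                                    # population-vector readout
    return np.angle(np.sum(act * np.exp(1j * prefs)))

# Broadly tuned population response corrupted by additive noise
clean = np.exp(2.0 * (np.cos(prefs - theta_true) - 1.0))
noisy = np.clip(clean + 0.3 * rng.standard_normal(N), 0, None)

# Recurrent dynamics: Gaussian lateral weights, squaring, divisive normalization
diff = np.angle(np.exp(1j * (prefs[:, None] - prefs[None, :])))
W = np.exp(-diff**2 / (2 * 0.5**2))
act = noisy.copy()
for _ in range(20):
    u = (W @ act) ** 2
    act = u / (1.0 + 0.1 * np.sum(u))               # settles into a smooth bump

print("true direction       :", theta_true)
print("decoded from noisy    :", round(popvec(noisy) % (2 * np.pi), 3))
print("decoded after cleanup :", round(popvec(act) % (2 * np.pi), 3))
```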
Hippocampus | 2008
Hugh T. Blair; Kishan Gupta; Kechen Zhang
As a rat navigates through a familiar environment, its position in space is encoded by firing rates of place cells and grid cells. Oscillatory interference models propose that this positional firing rate code is derived from a phase code, which stores the rat's position as a pattern of phase angles between velocity-modulated theta oscillations. Here we describe a three-stage network model, which formalizes the computational steps that are necessary for converting phase-coded position signals (represented by theta oscillations) into rate-coded position signals (represented by grid cells and place cells). The first stage of the model proposes that the phase-coded position signal is stored and updated by a bank of ring attractors, like those that have previously been hypothesized to perform angular path integration in the head-direction cell system. We show analytically how ring attractors can serve as central pattern generators for producing velocity-modulated theta oscillations, and we propose that such ring attractors may reside in subcortical areas where hippocampal theta rhythm is known to originate. In the second stage of the model, grid fields are formed by oscillatory interference between theta cells residing in different (but not the same) ring attractors. The model's third stage assumes that hippocampal neurons generate Gaussian place fields by computing weighted sums of inputs from a basis set of many grid fields. Here we show that under this assumption, the spatial frequency spectrum of the Gaussian place field defines the vertex spacings of grid cells that must provide input to the place cell. This analysis generates a testable prediction that grid cells with large vertex spacings should send projections to the entire hippocampus, whereas grid cells with smaller vertex spacings may project more selectively to the dorsal hippocampus, where place fields are smallest.
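The following is an illustrative sketch of the phase-to-rate conversion at the heart of the second stage, using the generic oscillatory-interference construction: three velocity-modulated oscillators with preferred directions 60 degrees apart integrate running velocity, and their interference with a baseline oscillation, binned by position along a simulated random-walk trajectory, yields a hexagonal rate map. The trajectory, grid scale, and parameters below are arbitrary, and the code does not reproduce the paper's ring-attractor or place-cell stages.

```python
# Illustrative phase-code-to-rate-code conversion via oscillatory interference.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.02, 600.0                       # s
beta = 2 * np.pi / 0.40                   # rad per metre of displacement (sets grid scale)
dirs = np.array([[np.cos(a), np.sin(a)] for a in np.deg2rad([0, 60, 120])])

# Smooth random-walk trajectory inside a 1 m x 1 m box
steps = int(T / dt)
heading = rng.uniform(0, 2 * np.pi)
pos = np.zeros((steps, 2)) + 0.5
for t in range(1, steps):
    heading += 0.5 * rng.standard_normal()
    step = 0.25 * dt * np.array([np.cos(heading), np.sin(heading)])
    pos[t] = np.clip(pos[t - 1] + step, 0.0, 1.0)

# Each velocity-modulated oscillator integrates velocity along its preferred
# direction, so its phase relative to baseline theta encodes position.
vel = np.gradient(pos, dt, axis=0)
phase = beta * np.cumsum(vel @ dirs.T * dt, axis=0)       # shape (steps, 3)

# Interference of the three oscillators with baseline -> position-dependent rate
rate = np.prod((1 + np.cos(phase)) / 2, axis=1)

# Bin by position to recover the (hexagonal) firing-rate map
H, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=20, weights=rate)
occ, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=20)
ratemap = H / np.maximum(occ, 1)
print("rate map peak-to-mean ratio:", round(ratemap.max() / ratemap.mean(), 2))
```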
The Journal of Neuroscience | 2007
Hugh T. Blair; Adam C. Welday; Kechen Zhang
The dorsomedial entorhinal cortex (dMEC) of the rat brain contains a remarkable population of spatially tuned neurons called grid cells (Hafting et al., 2005). Each grid cell fires selectively at multiple spatial locations, which are geometrically arranged to form a hexagonal lattice that tiles the surface of the rat's environment. Here, we show that grid fields can combine with one another to form moiré interference patterns, referred to as “moiré grids,” that replicate the hexagonal lattice over an infinite range of spatial scales. We propose that dMEC grids are actually moiré grids formed by interference between much smaller “theta grids,” which are hypothesized to be the primary source of movement-related theta rhythm in the rat brain. The formation of moiré grids from theta grids obeys two scaling laws, referred to as the length and rotational scaling rules. The length scaling rule appears to account for firing properties of grid cells in layer II of dMEC, whereas the rotational scaling rule can better explain properties of layer III grid cells. Moiré grids built from theta grids can be combined to form yet larger grids and can also be used as basis functions to construct memory representations of spatial locations (place cells) or visual images. Memory representations built from moiré grids are automatically endowed with size invariance by the scaling properties of the moiré grids. We therefore propose that moiré interference between grid fields may constitute an important principle of neural computation underlying the construction of scale-invariant memory representations.
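For reference, the two scaling rules have simple closed forms in standard moiré geometry for a base grid of spacing lambda (a sketch of the kind of relations the abstract refers to; the paper's exact constants may differ): stretching one grid by a factor 1 + epsilon gives a moiré spacing of about lambda(1 + epsilon)/epsilon, while rotating one grid by an angle theta gives about lambda / (2 sin(theta/2)).

```python
# Back-of-envelope moire scaling (standard moire geometry; illustrative values).
import numpy as np

lam = 0.05            # base "theta grid" spacing in metres (illustrative)

def moire_length_scaling(eps):
    """Moire spacing when one grid is stretched by a factor (1 + eps)."""
    return lam * (1 + eps) / eps

def moire_rotational_scaling(theta_rad):
    """Moire spacing when one grid is rotated by theta relative to the other."""
    return lam / (2 * np.sin(theta_rad / 2))

for eps in (0.05, 0.1, 0.2):
    print(f"stretch {1 + eps:.2f}x -> moire spacing {moire_length_scaling(eps) * 100:.1f} cm")
for deg in (4, 8, 16):
    print(f"rotate {deg:2d} deg  -> moire spacing {moire_rotational_scaling(np.deg2rad(deg)) * 100:.1f} cm")
```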
Annual Review of Neuroscience | 2012
James J. Knierim; Kechen Zhang
Attractor networks are a popular computational construct used to model different brain systems. These networks allow elegant computations that are thought to represent a number of aspects of brain function. Although there is good reason to believe that the brain displays attractor dynamics, it has proven difficult to test experimentally whether any particular attractor architecture resides in any particular brain circuit. We review models and experimental evidence for three systems in the rat brain that are presumed to be components of the rat's navigational and memory system. Head-direction cells have been modeled as a ring attractor, grid cells as a plane attractor, and place cells both as a plane attractor and as a point attractor. Whereas the models have proven to be extremely useful conceptual tools, the experimental evidence in their favor, although intriguing, is still mostly circumstantial.
Current Biology | 2011
Eric T. Carlson; Russell J. Rasquinha; Kechen Zhang; Charles E. Connor
Sparse coding has long been recognized as a primary goal of image transformation in the visual system. Sparse coding in early visual cortex is achieved by abstracting local oriented spatial frequencies and by excitatory/inhibitory surround modulation. Object responses are thought to be sparse at subsequent processing stages, but neural mechanisms for higher-level sparsification are not known. Here, convergent results from macaque area V4 neural recording and simulated V4 populations trained on natural object contours suggest that sparse coding is achieved in midlevel visual cortex by emphasizing representation of acute convex and concave curvature. We studied 165 V4 neurons with a random, adaptive stimulus strategy to minimize bias and explore an unlimited range of contour shapes. V4 responses were strongly weighted toward contours containing acute convex or concave curvature. In contrast, the tuning distribution in nonsparse simulated V4 populations was strongly weighted toward low curvature. But as sparseness constraints increased, the simulated tuning distribution shifted progressively toward more acute convex and concave curvature, matching the neural recording results. These findings indicate a sparse object coding scheme in midlevel visual cortex based on uncommon but diagnostic regions of acute contour curvature.
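To make the notion of sparseness concrete, here is a small sketch using a common sparseness index (a Treves-Rolls / Vinje-Gallant style measure; the paper may quantify sparseness differently), applied to two toy response distributions: a dense one in which most stimuli drive the cell and a heavy-tailed one in which only a few do.

```python
# Sketch: a common sparseness index applied to toy response distributions.
import numpy as np

def sparseness(r):
    """1 = maximally sparse, 0 = fully dense, for non-negative responses r."""
    r = np.asarray(r, dtype=float)
    n = r.size
    a = (np.sum(r) / n) ** 2 / (np.sum(r**2) / n + 1e-12)
    return (1 - a) / (1 - 1 / n)

rng = np.random.default_rng(2)
dense = rng.uniform(0.5, 1.0, size=1000)           # most stimuli drive the cell
sparse = rng.exponential(1.0, size=1000) ** 3      # rare large responses, many near zero
print("dense responses :", round(sparseness(dense), 2))
print("sparse responses:", round(sparseness(sparse), 2))
```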
Neural Computation | 1999
Kechen Zhang; Martin I. Sereno; Margaret E. Sereno
We previously demonstrated that it is possible to learn position-independent responses to rotation and dilation by filtering rotations and dilations with different centers through an input layer with MT-like speed and direction tuning curves and connecting them to an MST-like layer with simple Hebbian synapses (Sereno and Sereno 1991). By analyzing an idealized version of the network with broader, sinusoidal direction-tuning and linear speed-tuning, we show analytically that a Hebb rule trained with arbitrary rotation, dilation/contraction, and translation velocity fields yields units with weight fields that are a rotation plus a dilation or contraction field, and whose responses to a rotating or dilating/contracting disk are exactly position independent. Differences between the performance of this idealized model and our original model (and real MST neurons) are discussed.
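A toy version of the idealized analysis, with deliberately simplified ingredients: the input at each grid position is represented directly as the local flow vector (roughly what a bank of cosine-direction, linear-speed tuned units amounts to), the output is a single linear MST-like unit trained with a plain Hebb rule and weight normalization, and the training-field amplitudes are arbitrary. After training on random rotation, dilation/contraction, and translation fields, the learned weight field ends up close to the rotation-dilation subspace, echoing the result described above.

```python
# Toy Hebbian learning of rotation/dilation selectivity (illustrative parameters).
import numpy as np

rng = np.random.default_rng(3)
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9)), -1).reshape(-1, 2)
W = 0.01 * rng.standard_normal(grid.shape)     # one 2-D weight vector per position
eta = 1e-3

def random_flow():
    c = rng.uniform(-0.5, 0.5, 2)              # random field center
    kind = rng.integers(3)
    d = grid - c
    if kind == 0:                              # rotation about c
        return rng.normal() * np.stack([-d[:, 1], d[:, 0]], -1)
    if kind == 1:                              # dilation / contraction about c
        return rng.normal() * d
    return np.tile(rng.normal(scale=0.5, size=2), (len(grid), 1))   # translation

for _ in range(20000):
    v = random_flow()
    y = np.sum(W * v)                          # linear MST-like unit
    W += eta * y * v                           # Hebbian update
    W /= max(np.linalg.norm(W), 1.0)           # keep weights bounded

# Compare the learned weight field with pure rotation / dilation / translation templates
rot = np.stack([-grid[:, 1], grid[:, 0]], -1)
dil = grid.copy()
trans = np.tile([1.0, 0.0], (len(grid), 1))
for name, tpl in [("rotation", rot), ("dilation", dil), ("translation", trans)]:
    c = np.sum(W * tpl) / (np.linalg.norm(W) * np.linalg.norm(tpl))
    print(f"overlap with {name:11s}: {c:+.2f}")
```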
Frontiers in Neural Circuits | 2013
Christopher DiMattina; Kechen Zhang
In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.
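As a concrete illustration of the first paradigm mentioned, the classical optimal-stimulus search, here is a schematic sketch: treat the neuron as a black-box firing-rate function, average a few noisy responses per stimulus, and hill-climb the stimulus under an energy constraint. The "neuron" below is a stand-in quadratic model and the optimizer is the simplest possible one; the review discusses far more efficient methods and the complications (adaptation, network constraints) that real experiments add.

```python
# Schematic optimal-stimulus search with a stand-in model neuron (illustrative).
import numpy as np

rng = np.random.default_rng(4)
D = 20                                         # stimulus dimensionality
w1, w2 = rng.standard_normal(D), rng.standard_normal(D)

def firing_rate(s):
    """Stand-in neuron: noisy quadratic combination of two linear filters."""
    drive = (w1 @ s) ** 2 + 0.5 * (w2 @ s) ** 2
    return drive + 0.5 * rng.standard_normal()

def project(s, energy=1.0):                    # stimulus-space (energy) constraint
    return s * energy / np.linalg.norm(s)

s = project(rng.standard_normal(D))
best_rate = np.mean([firing_rate(s) for _ in range(5)])
for step in range(300):                        # simple stochastic hill climbing
    cand = project(s + 0.2 * rng.standard_normal(D))
    rate = np.mean([firing_rate(cand) for _ in range(5)])   # average noisy trials
    if rate > best_rate:
        s, best_rate = cand, rate

print("final mean rate:", round(float(best_rate), 2))
print("cosine similarity of optimum with dominant filter:",
      round(abs(w1 @ s) / np.linalg.norm(w1), 2))
```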
Neural Computation | 2011
Christopher DiMattina; Kechen Zhang
The stimulus-response relationship of many sensory neurons is nonlinear, but fully quantifying this relationship by a complex nonlinear model may require too much data to be experimentally tractable. Here we present a theoretical study of a general two-stage computational method that may help to significantly reduce the number of stimuli needed to obtain an accurate mathematical description of nonlinear neural responses. Our method of active data collection first adaptively generates stimuli that are optimal for estimating the parameters of competing nonlinear models and then uses these estimates to generate stimuli online that are optimal for discriminating these models. We applied our method to simple hierarchical circuit models, including nonlinear networks built on the spatiotemporal or spectral-temporal receptive fields, and confirmed that collecting data using our two-stage adaptive algorithm was far more effective for estimating and comparing competing nonlinear sensory processing models than standard nonadaptive methods using random stimuli.
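A minimal sketch of the model-comparison half of the two-stage idea: given two fitted candidate models, present on each trial the stimulus from a pool on which their predicted responses differ most, and accumulate log-likelihood evidence from the observed responses. The models, the disagreement criterion, and the Gaussian noise assumption below are deliberately simple stand-ins, not the paper's hierarchical circuits or design criteria.

```python
# Minimal model-discrimination stage: pick stimuli where two candidate models
# disagree most, then accumulate evidence (illustrative stand-in models).
import numpy as np

rng = np.random.default_rng(5)
D, pool_size = 10, 500
w = rng.standard_normal(D)

def model_linear(s):    return np.maximum(s @ w, 0)           # candidate 1
def model_quadratic(s): return 0.1 * (s @ w) ** 2             # candidate 2
def true_neuron(s):     return model_quadratic(s) + 0.2 * rng.standard_normal()

pool = rng.standard_normal((pool_size, D))
evidence = 0.0                      # log-likelihood ratio (quadratic vs. linear)
for trial in range(30):
    # Discrimination criterion: largest squared difference in predicted response
    idx = np.argmax((model_linear(pool) - model_quadratic(pool)) ** 2)
    s = pool[idx]
    r = true_neuron(s)
    evidence += ((r - model_linear(s)) ** 2 - (r - model_quadratic(s)) ** 2) / (2 * 0.2**2)
    pool = np.delete(pool, idx, axis=0)

print("accumulated evidence for the quadratic model:", round(float(evidence), 1))
```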
Neural Computation | 2010
Christopher DiMattina; Kechen Zhang
It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.
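A small numerical check of the flavor of result described, for the exponential-gain case (one of the three gain families mentioned): in a three-layer network with exponential hidden units, shifting one hidden unit's threshold can be exactly compensated by rescaling that unit's output weight, so two structurally different parameter sets implement the same input-output function. The network below is an arbitrary illustrative example, not the paper's general derivation.

```python
# Functional equivalence under an exponential hidden-unit gain (illustrative).
import numpy as np

rng = np.random.default_rng(6)
D, H = 5, 4
W_in = rng.standard_normal((H, D))             # input -> hidden weights
theta = rng.standard_normal(H)                 # hidden thresholds
w_out = rng.standard_normal(H)                 # hidden -> output weights

def network(x, W_in, theta, w_out):
    hidden = np.exp(W_in @ x - theta)          # exponential gain function
    return w_out @ hidden

# Shift one hidden threshold by delta and compensate in its output weight:
# w * exp(a - theta - delta) == (w * exp(delta)) * exp(a - theta - delta) ... i.e.
# scaling the output weight by exp(delta) cancels the threshold shift exactly.
delta = 0.7
theta2 = theta.copy();  theta2[0] += delta
w_out2 = w_out.copy();  w_out2[0] *= np.exp(delta)

x = rng.standard_normal((100, D))
y1 = np.array([network(xi, W_in, theta, w_out) for xi in x])
y2 = np.array([network(xi, W_in, theta2, w_out2) for xi in x])
print("max |difference| over 100 random inputs:", float(np.max(np.abs(y1 - y2))))
```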