
Publication


Featured research published by Karthik H. Shankar.


The Journal of Neuroscience | 2014

A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region

Marc W. Howard; Christopher J. MacDonald; Zoran Tiganj; Karthik H. Shankar; Qian Du; Michael E. Hasselmo; Howard Eichenbaum

The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1.


Neural Computation | 2012

A scale-invariant internal representation of time

Karthik H. Shankar; Marc W. Howard

We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
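The two-stage mechanism in this abstract — a bank of leaky integrators computing the Laplace transform of the stimulus, followed by a linear operator approximating the inverse transform — can be sketched numerically. The decay rates, the pulse stimulus, and the derivative order k of the Post-style inversion below are illustrative assumptions, not parameters from the paper.

```python
import math
import numpy as np

# A bank of leaky integrators, dF/dt = -s*F + f(t), encodes the Laplace
# transform F(s) of the stimulus history f.  Values here are illustrative.
s = np.linspace(0.5, 20.0, 200)     # decay rates, one per integrator
dt, T = 0.001, 4.0

F = np.zeros_like(s)
for t in np.arange(0.0, T, dt):
    f_t = 1.0 if t < 0.05 else 0.0  # a brief stimulus pulse near t = 0
    F += dt * (-s * F + f_t)

# Post-style approximation of the inverse Laplace transform:
#   f~(tau) ~ (-1)^k / k! * s^(k+1) * d^k F / ds^k,  with tau = k / s.
k = 4
dFk = F.copy()
for _ in range(k):
    dFk = np.gradient(dFk, s)       # numerical k-th derivative in s
f_tilde = ((-1) ** k / math.factorial(k)) * s ** (k + 1) * dFk
tau = k / s                         # each unit's "time cell" latency

# The reconstruction peaks near the elapsed time since the pulse (~4 s),
# biased slightly early at this small k, with a spread proportional to
# the delay -- the scale invariance described in the abstract.
print(tau[np.argmax(f_tilde)])
```

Repeating the run with pulses delivered at different delays stretches the same peak shape in proportion to the delay, which is the scale-invariant property the paper derives.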


Hippocampus | 2012

Ensembles of human MTL neurons “jump back in time” in response to a repeated stimulus

Marc W. Howard; Indre V. Viskontas; Karthik H. Shankar; Itzhak Fried

Episodic memory, which depends critically on the integrity of the medial temporal lobe (MTL), has been described as “mental time travel” in which the rememberer “jumps back in time.” The neural mechanism underlying this ability remains elusive. Mathematical and computational models of performance in episodic memory tasks provide a specific hypothesis regarding the computation that supports such a jump back in time. The models suggest that a representation of temporal context, a representation that changes gradually over macroscopic periods of time, is the cue for episodic recall. According to these models, a jump back in time corresponds to a stimulus recovering a prior state of temporal context. In vivo single‐neuron recordings were taken from the human MTL while epilepsy patients distinguished novel from repeated images in a continuous recognition memory task. The firing pattern of the ensemble of MTL neurons showed robust temporal autocorrelation over macroscopic periods of time during performance of the memory task. The gradually‐changing part of the ensemble state was causally affected by the visual stimulus being presented. Critically, repetition of a stimulus caused the ensemble to elicit a pattern of activity that resembled the pattern of activity present before the initial presentation of the stimulus. These findings confirm a direct prediction of this class of temporal context models and may be a signature of the mechanism that underlies the experience of episodic memory as mental time travel.


Brain Research | 2010

Timing using temporal context.

Karthik H. Shankar; Marc W. Howard

We present a memory model that explicitly constructs and stores the temporal information about when a stimulus was encountered in the past. The temporal information is constructed from a set of temporal context vectors adapted from the temporal context model (TCM). These vectors are leaky integrators that could be constructed from a population of persistently firing cells. An array of temporal context vectors with different decay rates calculates the Laplace transform of real-time events. Simple bands of feedforward excitatory and inhibitory connections from these temporal context vectors drive another population of cells, the timing cells, which approximately reconstruct the entire temporal history of past events. The temporal representation of events farther in the past is less accurate than that of more recent events. This history-reconstruction procedure, which we refer to as timing from inverse Laplace transform (TILT), displays a scalar property with respect to the accuracy of reconstruction. When incorporated into a simple associative memory framework, we show that TILT predicts well-timed peak responses and the Weber-law property observed in interval timing tasks and classical conditioning experiments.


Neural Computation | 2016

Neural mechanism to simulate a scale-invariant future

Karthik H. Shankar; Inder Singh; Marc W. Howard

Predicting the timing and order of future events is an essential feature of cognition in higher life forms. We propose a neural mechanism to nondestructively translate the current state of spatiotemporal memory into the future, so as to construct an ordered set of future predictions almost instantaneously. We hypothesize that within each cycle of hippocampal theta oscillations, the memory state is swept through a range of translations to yield an ordered set of future predictions through modulations in synaptic connections. Theoretically, we operationalize critical neurobiological findings from hippocampal physiology in terms of neural network equations representing spatiotemporal memory. Combined with constraints based on physical principles requiring scale invariance and coherence in translation across memory nodes, the proposition results in Weber-Fechner spacing for the representation of both past (memory) and future (prediction) timelines. We show that the phenomenon of phase precession of neurons in the hippocampus and ventral striatum corresponds to the cognitive act of future prediction.


Physical Review D | 2012

Metric theory of gravity with torsion in an extra dimension

Karthik H. Shankar; Anand Balaraman; Kameshwar C. Wali

We consider a theory of gravity with a hidden extra dimension and metric-dependent torsion. A set of physically motivated constraints is imposed on the geometry so that the torsion stays confined to the extra dimension and the extra dimension stays hidden at the level of four-dimensional geodesic motion. At the kinematic level, the theory maps on to General Relativity, but the dynamical field equations that follow from the action principle deviate markedly from the standard Einstein equations. We study static spherically symmetric vacuum solutions and homogeneous-isotropic cosmological solutions that emerge from the field equations. In both cases, we find solutions of significant physical interest. Most notably, we find positive-mass solutions with a naked singularity that match the well-known Schwarzschild solution at large distances but lack an event horizon. In the cosmological context, we find an oscillatory scenario, in contrast to the inevitable singular big bang of standard cosmology.


arXiv: Neurons and Cognition | 2015

Generic Construction of Scale-Invariantly Coarse Grained Memory

Karthik H. Shankar

Encoding temporal information from the recent past as spatially distributed activations is essential if the entire recent past is to be simultaneously accessible. Any biological or synthetic agent that relies on the past to predict and plan the future would be endowed with such a spatially distributed temporal memory. Simplistically, we would expect resource limitations to demand that the memory system store only the most useful information for future prediction. For natural signals in the real world, which show scale-free temporal fluctuations, the predictive information encoded in memory is maximal if the past information is scale-invariantly coarse grained. Here we examine the general mechanism for constructing a scale-invariantly coarse grained memory system. Remarkably, the generic construction is equivalent to encoding linear combinations of the Laplace transform of the past information and their approximate inverses. This reveals a fundamental constraint on the construction of memory networks that attempt to maximize the storage of predictive information relevant to the natural world.


Modern Physics Letters A | 2010

KALUZA–KLEIN THEORY WITH TORSION CONFINED TO THE EXTRA DIMENSION

Karthik H. Shankar; Kameshwar C. Wali

Here we consider a variant of the five-dimensional Kaluza–Klein (KK) theory within the framework of Einstein–Cartan formalism that includes torsion. By imposing a set of constraints on torsion and Ricci rotation coefficients, we show that the torsion components are completely expressed in terms of the metric. Moreover, the Ricci tensor in 5D corresponds exactly to what one would obtain from torsion-free general relativity on a 4D hypersurface. The contributions of the scalar and vector fields of the standard KK theory to the Ricci tensor and the affine connections are completely nullified by the contributions from torsion. As a consequence, geodesic motions do not distinguish the torsion free 4D spacetime from a hypersurface of 5D spacetime with torsion satisfying the constraints. Since torsion is not an independent dynamical variable in this formalism, the modified Einstein equations are different from those in the general Einstein–Cartan theory. This leads to important cosmological consequences such as the emergence of cosmic acceleration.


Psychological Review | 2017

Neural Scaling Laws for an Uncertain World.

Marc W. Howard; Karthik H. Shankar

Autonomous neural systems must efficiently process information in a wide range of novel environments which may have very different statistical properties. We consider the problem of how to optimally distribute receptors along a 1-dimensional continuum consistent with the following design principles. First, neural representations of the world should obey a neural uncertainty principle—making as few assumptions as possible about the statistical structure of the world. Second, neural representations should convey, as much as possible, equivalent information about environments with different statistics. The results of these arguments resemble the structure of the visual system and provide a natural explanation of the behavioral Weber-Fechner law, a foundational result in psychology. Because the derivation is extremely general, this suggests that similar scaling relationships should be observed not only in sensory continua, but also in neural representations of “cognitive” 1-dimensional quantities such as time or numerosity.
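The Weber-Fechner conclusion can be illustrated with a toy sketch: if receptor centers along the continuum are geometrically spaced (a constant ratio between neighbors), rescaling the world by any power of that ratio maps the receptor lattice onto a shifted copy of itself, so the population conveys equivalent information at every scale. The specific numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Geometric (Weber-Fechner) spacing of receptor centers on a 1-D continuum:
# each preferred value is a constant ratio r times the previous one.
x_min, r, n = 0.1, 1.2, 30                  # illustrative values
centers = x_min * r ** np.arange(n)

# Rescaling the stimulus axis by r^3 maps the lattice onto itself,
# shifted by exactly three receptors -- no scale is treated specially.
rescaled = (r ** 3) * centers
print(np.allclose(rescaled[:-3], centers[3:]))   # True
```

Uniform (arithmetic) spacing fails this test: multiplying a uniformly spaced lattice by a constant changes the spacing itself, so stimuli at different scales would be represented with different resolution.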


Topics in Cognitive Science | 2014

Quantum random walks and decision making.

Karthik H. Shankar

How realistic is it to adopt a quantum random walk model to account for decisions involving two choices? Here, we discuss the neural plausibility and the effect of initial state and boundary thresholds on such a model and contrast it with various features of the classical random walk model of decision making.
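One contrast between the two models is how fast probability mass spreads toward a decision boundary: a classical random walk's standard deviation grows as the square root of the number of steps, while a discrete-time quantum walk with a Hadamard coin spreads ballistically, roughly 0.54 times the number of steps. The sketch below is a generic textbook Hadamard walk, an illustrative assumption rather than the specific model discussed in the paper.

```python
import numpy as np

def hadamard_walk(steps):
    """Position distribution of a discrete-time quantum walk on a line."""
    n = 2 * steps + 1                             # positions -steps .. steps
    psi = np.zeros((n, 2), dtype=complex)         # amplitude[position, coin]
    psi[steps] = np.array([1, 1j]) / np.sqrt(2)   # symmetric initial coin
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin operator
    for _ in range(steps):
        psi = psi @ H.T                           # flip the coin at each site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]              # coin state 0 steps right
        shifted[:-1, 1] = psi[1:, 1]              # coin state 1 steps left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)         # probability at each position

prob = hadamard_walk(100)
positions = np.arange(-100, 101)
sigma = np.sqrt((prob * positions ** 2).sum())
# A classical 100-step walk has a standard deviation of 10; the quantum
# walk's is several times larger, so it reaches a boundary much sooner.
print(round(sigma, 1))
```

Replacing the amplitudes with classical probabilities (and the Hadamard coin with a fair coin flip) recovers the diffusive square-root spread, which is exactly the contrast the abstract draws between the two models.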
