
Publication


Featured research published by Jeremy R. Manning.


The Journal of Neuroscience | 2009

Broadband Shifts in Local Field Potential Power Spectra Are Correlated with Single-Neuron Spiking in Humans

Jeremy R. Manning; Joshua J. Jacobs; Itzhak Fried; Michael J. Kahana

A fundamental question in neuroscience concerns the relation between the spiking of individual neurons and the aggregate electrical activity of neuronal ensembles as seen in local field potentials (LFPs). Because LFPs reflect both spiking activity and subthreshold events, this question is not simply one of data aggregation. Recording from 20 neurosurgical patients, we directly examined the relation between LFPs and neuronal spiking. Examining 2030 neurons in widespread brain regions, we found that firing rates were positively correlated with broadband (2–150 Hz) shifts in the LFP power spectrum. In contrast, narrowband oscillations correlated both positively and negatively with firing rates at different recording sites. Broadband power shifts were a more reliable predictor of neuronal spiking than narrowband power shifts. These findings suggest that broadband LFP power provides valuable information concerning neuronal activity beyond that contained in narrowband oscillations.
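
As a rough illustration of the kind of relationship reported above, the sketch below correlates broadband (2–150 Hz) LFP power with per-epoch spike counts. It uses synthetic stand-in data and standard SciPy routines; it is not the paper's analysis pipeline, and the sampling rate, epoching, and spectral settings are assumptions.

    # Illustrative sketch only (synthetic data; assumed parameters), not the
    # paper's pipeline: correlate broadband LFP power with firing rate across epochs.
    import numpy as np
    from scipy.signal import welch
    from scipy.stats import pearsonr

    fs = 1000.0                                  # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    lfp = rng.standard_normal((200, 2000))       # stand-in LFP: 200 epochs x 2000 samples
    spike_counts = rng.poisson(5, size=200)      # stand-in spike counts, one per epoch

    freqs, psd = welch(lfp, fs=fs, nperseg=512, axis=-1)   # per-epoch power spectra
    band = (freqs >= 2) & (freqs <= 150)                   # broadband range from the abstract
    broadband_power = np.log(psd[:, band]).mean(axis=1)    # mean log power, 2-150 Hz

    r, p = pearsonr(broadband_power, spike_counts)
    print(f"broadband power vs. firing rate: r = {r:.3f}, p = {p:.3g}")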


Proceedings of the National Academy of Sciences of the United States of America | 2011

Oscillatory patterns in temporal lobe reveal context reinstatement during memory search

Jeremy R. Manning; Sean M. Polyn; Gordon H. Baltuch; Brian Litt; Michael J. Kahana

Psychological theories of memory posit that when people recall a past event, they not only recover the features of the event itself, but also recover information associated with other events that occurred nearby in time. The events surrounding a target event, and the thoughts they evoke, may be considered to represent a context for the target event, helping to distinguish that event from similar events experienced at different times. The ability to reinstate this contextual information during memory search has been considered a hallmark of episodic, or event-based, memory. We sought to determine whether context reinstatement may be observed in electrical signals recorded from the human brain during episodic recall. Analyzing electrocorticographic recordings taken as 69 neurosurgical patients studied and recalled lists of words, we uncovered a neural signature of context reinstatement. Upon recalling a studied item, we found that the recorded patterns of brain activity were not only similar to the patterns observed when the item was studied, but were also similar to the patterns observed during study of neighboring list items, with similarity decreasing reliably with positional distance. The degree to which individual patients displayed this neural signature of context reinstatement was correlated with their tendency to recall neighboring list items successively. These effects were particularly strong in temporal lobe recordings. Our findings show that recalling a past event evokes a neural signature of the temporal context in which the event occurred, thus pointing to a neural basis for episodic memory.
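
The core measurement described above, similarity between recall-period brain patterns and study-period patterns as a function of positional distance, can be sketched in a few lines. The snippet below uses synthetic patterns and cosine similarity purely for illustration; the actual study used electrocorticographic features and its own similarity analysis.

    # Illustrative sketch (synthetic data): similarity between the pattern observed
    # when recalling an item and the study-phase patterns of items at each lag.
    import numpy as np

    rng = np.random.default_rng(1)
    n_items, n_features = 15, 100
    study_patterns = rng.standard_normal((n_items, n_features))   # one pattern per studied item
    recalled_pos = 7
    recall_pattern = study_patterns[recalled_pos] + rng.standard_normal(n_features)  # noisy reinstatement

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity as a function of positional lag from the recalled item's study position
    for lag in range(-5, 6):
        pos = recalled_pos + lag
        if 0 <= pos < n_items:
            print(lag, round(cosine(recall_pattern, study_patterns[pos]), 3))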


PLOS ONE | 2014

Topographic Factor Analysis: A Bayesian Model for Inferring Brain Networks from Neural Data

Jeremy R. Manning; Rajesh Ranganath; Kenneth A. Norman; David M. Blei

The neural patterns recorded during a neuroscientific experiment reflect complex interactions between many brain regions, each comprising millions of neurons. However, the measurements themselves are typically abstracted from that underlying structure. For example, functional magnetic resonance imaging (fMRI) datasets comprise a time series of three-dimensional images, where each voxel in an image (roughly) reflects the activity of the brain structure(s) located at the corresponding point in space at the time the image was collected. fMRI data often exhibit strong spatial correlations, whereby nearby voxels behave similarly over time as the underlying brain structure modulates its activity. Here we develop topographic factor analysis (TFA), a technique that exploits spatial correlations in fMRI data to recover the underlying structure that the images reflect. Specifically, TFA casts each brain image as a weighted sum of spatial functions. The parameters of those spatial functions, which may be learned by applying TFA to an fMRI dataset, reveal the locations and sizes of the brain structures activated while the data were collected, as well as the interactions between those structures.
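
The "weighted sum of spatial functions" idea can be made concrete with a small sketch. Assuming Gaussian radial basis functions as the spatial functions (a common choice, used here only for illustration), each image is generated as per-image weights times a sources-by-voxels basis matrix:

    # Minimal sketch of the generative idea (assumed RBF form; not the paper's
    # implementation or inference procedure).
    import numpy as np

    def rbf(voxel_locs, center, width):
        # Gaussian spatial function centered at `center` with scale `width`
        return np.exp(-np.sum((voxel_locs - center) ** 2, axis=1) / (2 * width ** 2))

    rng = np.random.default_rng(2)
    voxel_locs = rng.uniform(0, 100, size=(5000, 3))      # 3-D voxel coordinates
    centers = rng.uniform(0, 100, size=(10, 3))           # K = 10 source locations
    widths = rng.uniform(5, 15, size=10)                  # source sizes
    F = np.stack([rbf(voxel_locs, c, w) for c, w in zip(centers, widths)])  # K x V basis
    weights = rng.standard_normal((20, 10))               # per-image source activations (T x K)
    images = weights @ F                                  # T x V images as weighted sums of sources

Fitting TFA amounts to inferring the centers, widths, and weights from observed images; the snippet only shows the forward (generative) direction.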


Memory | 2012

Interpreting semantic clustering effects in free recall

Jeremy R. Manning; Michael J. Kahana

The order in which participants choose to recall words from a studied list of randomly selected words provides insights into how memories of the words are represented, organised, and retrieved. One pervasive finding is that when a pair of semantically related words (e.g., “cat” and “dog”) is embedded in the studied list, the related words are often recalled successively. This tendency to successively recall semantically related words is termed semantic clustering (Bousfield, 1953; Bousfield & Sedgewick, 1944; Cofer, Bruce, & Reicher, 1966). Measuring semantic clustering effects requires making assumptions about which words participants consider to be similar in meaning. However, it is often difficult to gain insights into individual participants’ internal semantic models, and for this reason researchers typically rely on standardised semantic similarity metrics. Here we use simulations to gain insights into the expected magnitudes of semantic clustering effects given systematic differences between participants’ internal similarity models and the similarity metric used to quantify the degree of semantic clustering. Our results provide a number of useful insights into the interpretation of semantic clustering effects in free recall.
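
One simple way to quantify semantic clustering, shown below purely for illustration (it is not necessarily the metric used in the paper), is to compare the similarity of successively recalled pairs against the average similarity of all word pairs in the list:

    # Illustrative clustering score: mean similarity of adjacent recalls minus
    # the mean similarity of all list pairs (positive values suggest clustering).
    import numpy as np

    rng = np.random.default_rng(3)
    n_words = 16
    similarity = rng.uniform(0, 1, size=(n_words, n_words))   # stand-in semantic similarity matrix
    similarity = (similarity + similarity.T) / 2               # make it symmetric
    recall_order = [3, 4, 9, 2, 12]                            # indices of recalled items, in order

    adjacent = [similarity[a, b] for a, b in zip(recall_order, recall_order[1:])]
    all_pairs = similarity[np.triu_indices(n_words, k=1)]
    clustering_score = np.mean(adjacent) - np.mean(all_pairs)
    print(round(float(clustering_score), 3))

The paper's simulations ask how scores of this general kind behave when the similarity matrix used by the analyst differs systematically from the one participants actually use.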


Visual Neuroscience | 2009

Optimal design of photoreceptor mosaics: Why we do not see color at night

Jeremy R. Manning; David H. Brainard

While color vision mediated by rod photoreceptors in dim light is possible (Kelber & Roth, 2006), most animals, including humans, do not see in color at night. This is because their retinas contain only a single class of rod photoreceptors. Many of these same animals have daylight color vision, mediated by multiple classes of cone photoreceptors. We develop a general formulation, based on Bayesian decision theory, to evaluate the efficacy of various retinal photoreceptor mosaics. The formulation evaluates each mosaic under the assumption that its output is processed to optimally estimate the image. It also explicitly takes into account the statistics of the environmental image ensemble. Using the general formulation, we consider the trade-off between monochromatic and dichromatic retinal designs as a function of overall illuminant intensity. We are able to demonstrate a set of assumptions under which the prevalent biological pattern represents optimal processing. These assumptions include an image ensemble characterized by high correlations between image intensities at nearby locations, as well as high correlations between intensities in different wavelength bands. They also include a constraint on receptor photopigment biophysics and/or the information carried by different wavelengths that produces an asymmetry in the signal-to-noise ratio of the output of different receptor classes. Our results thus provide an optimality explanation for the evolution of color vision for daylight conditions and monochromatic vision for nighttime conditions. An additional result from our calculations is that regular spatial interleaving of two receptor classes in a dichromatic retina yields performance superior to that of a retina where receptors of the same class are clumped together.
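
In rough terms (with notation assumed here rather than taken from the paper), the Bayesian decision-theoretic criterion described above scores a mosaic M by the expected loss of the optimal image estimate computed from its noisy responses r:

    \hat{x}(r) = \arg\min_{\hat{x}} \int L(x, \hat{x})\, p(x \mid r, M)\, dx,
    \qquad
    \mathrm{Cost}(M) = \mathbb{E}_{x,\, r \mid M}\!\left[ L\big(x, \hat{x}(r)\big) \right],

where x is the environmental image, p(x | r, M) is the posterior implied by the image-ensemble prior and the mosaic's noise model, and L is a loss such as squared error; the mosaic with the lowest expected loss is preferred.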


PLOS Computational Biology | 2014

Unsupervised learning of cone spectral classes from natural images

Noah C. Benson; Jeremy R. Manning; David H. Brainard

The first step in the evolution of primate trichromatic color vision was the expression of a third cone class not present in ancestral mammals. This observation motivates a fundamental question about the evolution of any sensory system: how is it possible to detect and exploit the presence of a novel sensory class? We explore this question in the context of primate color vision. We present an unsupervised learning algorithm capable of both detecting the number of spectral cone classes in a retinal mosaic and learning the class of each cone using the inter-cone correlations obtained in response to natural image input. The algorithm's ability to classify cones is in broad agreement with experimental evidence about functional color vision for a wide range of mosaic parameters, including those characterizing dichromacy, typical trichromacy, anomalous trichromacy, and possible tetrachromacy.
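
A stripped-down version of the idea, clustering cones by the correlations of their responses to shared natural-image input, is sketched below with synthetic data. Note that the paper's algorithm also infers the number of classes, which this sketch assumes is known; the use of k-means here is an illustrative substitute, not the paper's method.

    # Illustrative sketch (synthetic data; k-means stands in for the paper's
    # unsupervised algorithm): recover cone classes from inter-cone correlations.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    n_cones, n_images = 300, 2000
    true_class = rng.integers(0, 2, size=n_cones)             # hidden spectral classes (synthetic)
    shared = rng.standard_normal((2, n_images))                # class-specific response components
    responses = shared[true_class] + 0.5 * rng.standard_normal((n_cones, n_images))

    corr = np.corrcoef(responses)                              # cone-by-cone correlation matrix
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(corr)
    agreement = max(np.mean(labels == true_class), np.mean(labels != true_class))
    print(f"recovered classes match ground truth for {agreement:.0%} of cones")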


Psychonomic Bulletin & Review | 2016

A neural signature of contextually mediated intentional forgetting

Jeremy R. Manning; Justin C. Hulbert; Jamal Williams; Luis Piloto; Lili Sahakyan; Kenneth A. Norman

The mental context in which we experience an event plays a fundamental role in how we organize our memories of an event (e.g. in relation to other events) and, in turn, how we retrieve those memories later. Because we use contextual representations to retrieve information pertaining to our past, processes that alter our representations of context can enhance or diminish our capacity to retrieve particular memories. We designed a functional magnetic resonance imaging (fMRI) experiment to test the hypothesis that people can intentionally forget previously experienced events by changing their mental representations of contextual information associated with those events. We had human participants study two lists of words, manipulating whether they were told to forget (or remember) the first list prior to studying the second list. We used pattern classifiers to track neural patterns that reflected contextual information associated with the first list and found that, consistent with the notion of contextual change, the activation of the first-list contextual representation was lower following a forget instruction than a remember instruction. Further, the magnitude of this neural signature of contextual change was negatively correlated with participants’ abilities to later recall items from the first list.
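
To make the classifier-based measure concrete, the sketch below trains a classifier to distinguish list-1 from list-2 study contexts and then reads out list-1 evidence after the instruction. It uses synthetic data and a plain logistic-regression classifier; the actual study used its own feature extraction and classification pipeline.

    # Illustrative sketch (synthetic data): track "list-1 context" classifier
    # evidence following the forget/remember instruction.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n_trs, n_voxels = 100, 500
    X_train = rng.standard_normal((n_trs, n_voxels))            # study-phase fMRI patterns
    y_train = np.repeat([1, 0], n_trs // 2)                      # 1 = list-1 context, 0 = list-2 context

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    X_post = rng.standard_normal((20, n_voxels))                 # patterns after the instruction
    list1_evidence = clf.predict_proba(X_post)[:, 1]             # classifier evidence for list-1 context
    print("mean list-1 context activation:", round(float(list1_evidence.mean()), 3))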


International Conference on Big Data | 2016

Enabling factor analysis on thousand-subject neuroimaging datasets

Michael J. Anderson; Mihai Capota; Javier S. Turek; Xia Zhu; Theodore L. Willke; Yida Wang; Po-Hsuan Chen; Jeremy R. Manning; Peter J. Ramadge; Kenneth A. Norman

The scale of functional magnetic resonance imaging (fMRI) data is rapidly increasing as large multi-subject datasets become widely available and high-resolution scanners are adopted. The inherent low dimensionality of the information in these data has led neuroscientists to consider factor analysis methods to extract and analyze the underlying brain activity. In this work, we consider two recent multi-subject factor analysis methods: the Shared Response Model and Hierarchical Topographic Factor Analysis. We perform analytical, algorithmic, and code optimizations to enable multi-node parallel implementations to scale. Single-node improvements yield 99x and 2062x speedups on the two methods and enable the processing of larger datasets. Our distributed implementations show strong scaling of 3.3x and 5.5x, respectively, with 20 nodes on real datasets. We demonstrate weak scaling on a synthetic dataset with 1024 subjects, equivalent in size to the largest fMRI dataset collected to date, on up to 1024 nodes and 32,768 cores.


International Workshop on Pattern Recognition in Neuroimaging | 2014

Hierarchical topographic factor analysis

Jeremy R. Manning; Rajesh Ranganath; Waitsang Keung; Nicholas B. Turk-Browne; Jonathan D. Cohen; Kenneth A. Norman; David M. Blei

Recent work has revealed that cognitive processes are often reflected in patterns of functional connectivity throughout the brain (for review see [16]). However, examining functional connectivity patterns using traditional methods carries a substantial computational burden in computing time and memory. Here we present a technique, termed hierarchical topographic factor analysis (HTFA), for efficiently discovering brain networks in large multi-subject neuroimaging datasets.


NeuroImage | 2018

A probabilistic approach to discovering dynamic full-brain functional connectivity patterns

Jeremy R. Manning; Xia Zhu; Theodore L. Willke; Rajesh Ranganath; Kimberly Stachenfeld; Uri Hasson; David M. Blei; Kenneth A. Norman

Recent research shows that the covariance structure of functional magnetic resonance imaging (fMRI) data, commonly described as functional connectivity, can change as a function of the participant's cognitive state (for review see Turk-Browne, 2013). Here we present a Bayesian hierarchical matrix factorization model, termed hierarchical topographic factor analysis (HTFA), for efficiently discovering full-brain networks in large multi-subject neuroimaging datasets. HTFA approximates each subject's network by first re-representing each brain image in terms of the activities of a set of localized nodes, and then computing the covariance of the activity time series of these nodes. The number of nodes, along with their locations, sizes, and activities over time, is learned from the data. Because the number of nodes is typically substantially smaller than the number of fMRI voxels, HTFA can be orders of magnitude more efficient than traditional voxel-based functional connectivity approaches. In one case study, we show that HTFA recovers the known connectivity patterns underlying a collection of synthetic datasets. In a second case study, we illustrate how HTFA may be used to discover dynamic full-brain activity and connectivity patterns in real fMRI data, collected as participants listened to a story. In a third case study, we carried out a similar series of analyses on fMRI data collected as participants viewed an episode of a television show. In these latter case studies, we found that the HTFA-derived activity and connectivity patterns can be used to reliably decode which moments in the story or show the participants were experiencing. Further, we found that these two classes of patterns contained partially non-overlapping information, such that decoders trained on combinations of activity-based and dynamic connectivity-based features performed better than decoders trained on activity or connectivity patterns alone. We replicated this latter result with two additional (previously developed) methods for efficiently characterizing full-brain activity and connectivity patterns.

Highlights: Hierarchical topographic factor analysis identifies full-brain activity and network dynamics in multi-subject brain data. Applied to fMRI data, HTFA-derived activity and connectivity patterns reflect story and movie timing information. Activity and connectivity patterns contain partially non-overlapping information about which moments of a story or movie participants are experiencing.
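
The step from node activities to dynamic connectivity can be sketched simply: given the per-node time series that HTFA infers, windowed correlations between nodes yield a time-varying connectivity feature vector. The snippet below uses random stand-in activities and an assumed window length; it illustrates the idea rather than the paper's exact procedure.

    # Illustrative sketch: dynamic full-brain connectivity features from node
    # activity time series (stand-in data; window length is an assumption).
    import numpy as np

    rng = np.random.default_rng(6)
    T, K = 300, 25                                    # timepoints and number of nodes
    node_activity = rng.standard_normal((T, K))       # e.g., node time courses inferred by HTFA

    window = 30
    dynamic_connectivity = []
    for t in range(T - window + 1):
        corr = np.corrcoef(node_activity[t:t + window].T)          # K x K correlation in this window
        dynamic_connectivity.append(corr[np.triu_indices(K, 1)])   # vectorized upper triangle
    dynamic_connectivity = np.array(dynamic_connectivity)          # one feature vector per window
    print(dynamic_connectivity.shape)                              # (T - window + 1, K*(K-1)//2)

Feature vectors like these, alone or combined with the node activities themselves, are the kind of inputs the decoding analyses described above operate on.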

Collaboration


Dive into Jeremy R. Manning's collaborations.

Top Co-Authors

Michael J. Kahana (University of Pennsylvania)
David H. Brainard (University of Pennsylvania)