Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Charles F. Cadieu is active.

Publication


Featured research published by Charles F. Cadieu.


Proceedings of the National Academy of Sciences of the United States of America | 2014

Performance-optimized hierarchical models predict neural responses in higher visual cortex

Daniel Yamins; Ha Hong; Charles F. Cadieu; Ethan A. Solomon; Darren Seibert; James J. DiCarlo

Significance: Humans and monkeys easily recognize objects in scenes. This ability is known to be supported by a network of hierarchically interconnected brain areas. However, understanding neurons in higher levels of this hierarchy has long remained a major challenge in visual systems neuroscience. We use computational techniques to identify a neural network model that matches human performance on challenging object categorization tasks. Although not explicitly constrained to match neural data, this model turns out to be highly predictive of neural responses in both the V4 and inferior temporal cortex, the top two layers of the ventral visual hierarchy. In addition to yielding greatly improved models of visual cortex, these results suggest that a process of biological performance optimization directly shaped neural mechanisms.

The ventral visual stream underlies key human visual object recognition abilities. However, neural encoding in the higher areas of the ventral stream remains poorly understood. Here, we describe a modeling approach that yields a quantitatively accurate model of inferior temporal (IT) cortex, the highest ventral cortical area. Using high-throughput computational techniques, we discovered that, within a class of biologically plausible hierarchical neural network models, there is a strong correlation between a model’s categorization performance and its ability to predict individual IT neural unit response data. To pursue this idea, we then identified a high-performing neural network that matches human performance on a range of recognition tasks. Critically, even though we did not constrain this model to match neural data, its top output layer turns out to be highly predictive of IT spiking responses to complex naturalistic images at both the single site and population levels. Moreover, the model’s intermediate layers are highly predictive of neural responses in the V4 cortex, a midlevel visual area that provides the dominant cortical input to IT. These results show that performance optimization—applied in a biologically appropriate model class—can be used to build quantitative predictive models of neural processing.
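The core analysis pattern described above, fitting a regularized linear mapping from model features to neural responses and scoring it by correlation on held-out images, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual pipeline (which uses deep network activations and recorded IT/V4 responses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: model features (images x units) and a neural response
# that is (by construction) a noisy linear function of those features.
n_images, n_features = 200, 50
features = rng.normal(size=(n_images, n_features))
true_weights = rng.normal(size=n_features)
responses = features @ true_weights + 0.1 * rng.normal(size=n_images)

# Fit ridge regression from features to responses on a training split.
train, test = slice(0, 150), slice(150, 200)
lam = 1.0
X, y = features[train], responses[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# "Neural predictivity": correlation between predicted and held-out responses.
pred = features[test] @ w
r = np.corrcoef(pred, responses[test])[0, 1]
print(round(r, 2))
```

In the paper's setting, `features` would come from a model layer and `responses` from a recorded neural site; the held-out correlation measures how well that layer explains the site.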


PLOS Computational Biology | 2014

Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

Charles F. Cadieu; Ha Hong; Daniel Yamins; Nicolas Pinto; Diego Ardila; Ethan A. Solomon; Najib J. Majaj; James J. DiCarlo

The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
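The "kernel analysis" idea of tracing generalization accuracy as a function of representational complexity can be caricatured with kernel ridge regression, where the regularizer limits decoder complexity. A toy sketch on synthetic two-class data; the paper's actual metric and complexity axis differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data standing in for a representation of images.
n, d = 300, 20
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.25 * rng.normal(size=n))  # labels carried by one axis

def kernel_ridge_accuracy(X, y, lam):
    """Train kernel ridge on one half, report accuracy on the other half.
    Lower lam permits more complex decision functions (a crude analogue of
    the complexity axis in kernel analysis)."""
    K = X @ X.T  # linear kernel, for simplicity
    tr, te = slice(0, n // 2), slice(n // 2, n)
    alpha = np.linalg.solve(K[tr, tr] + lam * np.eye(n // 2), y[tr])
    pred = np.sign(K[te, tr] @ alpha)
    return np.mean(pred == y[te])

# Sweep regularization: a curve of held-out accuracy vs. decoder complexity.
for lam in (1000.0, 10.0, 0.1):
    print(lam, round(kernel_ridge_accuracy(X, y, lam), 2))
```

A representation under which this curve reaches high accuracy at low complexity is, in this sense, a "better" representation for the task.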


Proceedings of the National Academy of Sciences of the United States of America | 2010

Oscillatory phase coupling coordinates anatomically dispersed functional cell assemblies

Ryan T. Canolty; Karunesh Ganguly; Steven W. Kennerley; Charles F. Cadieu; Kilian Koepsell; Jonathan D. Wallis; Jose M. Carmena

Hebb proposed that neuronal cell assemblies are critical for effective perception, cognition, and action. However, evidence for brain mechanisms that coordinate multiple coactive assemblies remains lacking. Neuronal oscillations have been suggested as one possible mechanism for cell assembly coordination. Prior studies have shown that spike timing depends upon local field potential (LFP) phase proximal to the cell body, but few studies have examined the dependence of spiking on distal LFP phases in other brain areas far from the neuron or the influence of LFP–LFP phase coupling between distal areas on spiking. We investigated these interactions by recording LFPs and single-unit activity using multiple microelectrode arrays in several brain areas and then used a unique probabilistic multivariate phase distribution to model the dependence of spike timing on the full pattern of proximal LFP phases, distal LFP phases, and LFP–LFP phase coupling between electrodes. Here we show that spiking activity in single neurons and neuronal ensembles depends on dynamic patterns of oscillatory phase coupling between multiple brain areas, in addition to the effects of proximal LFP phase. Neurons that prefer similar patterns of phase coupling exhibit similar changes in spike rates, whereas neurons with different preferences show divergent responses, providing a basic mechanism to bind different neurons together into coordinated cell assemblies. Surprisingly, phase-coupling–based rate correlations are independent of interneuron distance. Phase-coupling preferences correlate with behavior and neural function and remain stable over multiple days. These findings suggest that neuronal oscillations enable selective and dynamic control of distributed functional cell assemblies.


Progress in Brain Research | 2007

A quantitative theory of immediate visual recognition.

Thomas Serre; Gabriel Kreiman; Minjoon Kouh; Charles F. Cadieu; Ulf Knoblich; Tomaso Poggio

Human and non-human primates excel at visual recognition tasks. The primate visual system exhibits a strong degree of selectivity while at the same time being robust to changes in the input image. We have developed a quantitative theory to account for the computations performed by the feedforward path in the ventral stream of the primate visual cortex. Here we review recent predictions by a model instantiating the theory about physiological observations in higher visual areas. We also show that the model can perform recognition tasks on datasets of complex natural images at a level comparable to psychophysical measurements on human observers during rapid categorization tasks. In sum, the evidence suggests that the theory may provide a framework to explain the first 100-150 ms of visual object recognition. The model also constitutes a vivid example of how computational models can interact with experimental observations in order to advance our understanding of a complex phenomenon. We conclude by suggesting a number of open questions, predictions, and specific experiments for visual physiology and psychophysics.
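The theory's alternation of simple-cell template matching and complex-cell max pooling, which trades selectivity for position tolerance, can be caricatured in one dimension. A hypothetical sketch, not the model's actual Gabor-based front end:

```python
import numpy as np

# A 1-D caricature: a "simple cell" responds to a feature at one position;
# a "complex cell" takes the max over a pool of simple cells, gaining
# tolerance to position (the S -> C alternation in the model).
template = np.array([1.0, -1.0, 1.0])

def simple_cells(signal, template):
    """Template match at every position (sliding dot products)."""
    k = len(template)
    return np.array([signal[i:i + k] @ template
                     for i in range(len(signal) - k + 1)])

def complex_cell(signal, template):
    """Max over positions: responds wherever the feature appears."""
    return simple_cells(signal, template).max()

signal = np.zeros(20)
signal[5:8] = template           # feature at position 5
shifted = np.zeros(20)
shifted[12:15] = template        # same feature, different position

# The complex cell gives the same response to both inputs.
print(complex_cell(signal, template), complex_cell(shifted, template))
```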


Electronic Imaging | 2007

Bilinear models of natural images

Bruno A. Olshausen; Charles F. Cadieu; Jack Culpepper; David K. Warland

Previous work on unsupervised learning has shown that it is possible to learn Gabor-like feature representations, similar to those employed in the primary visual cortex, from the statistics of natural images. However, such representations are still not readily suited for object recognition or other high-level visual tasks because they can change drastically as the image changes due to object motion, variations in viewpoint, lighting, and other factors. In this paper, we describe how bilinear image models can be used to learn independent representations of the invariances, and their transformations, in natural image sequences. These models provide the foundation for learning higher-order feature representations that could serve as models of higher stages of processing in the cortex, in addition to having practical merit for computer vision tasks.
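A bilinear image model factors a patch into a content code and a transformation code that interact multiplicatively. A minimal synthesis sketch with a random interaction tensor, purely illustrative and not the learned model from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Bilinear generative model: a patch is rendered as
#   x[d] = sum_ij W[d, i, j] * y[i] * z[j]
# where y codes content ("what") and z codes transformation ("where").
d, n_y, n_z = 16, 4, 3
W = rng.normal(size=(d, n_y, n_z))   # random stand-in for a learned tensor

y = np.array([1.0, 0.0, 0.5, 0.0])   # content code
z_a = np.array([1.0, 0.0, 0.0])      # one transformation state
z_b = np.array([0.0, 1.0, 0.0])      # another transformation state

def render(W, y, z):
    """Bilinear synthesis: contract W with both factor vectors."""
    return np.einsum('dij,i,j->d', W, y, z)

# The same content code under two transformations yields different patches;
# the "what" information (y) is nevertheless shared between them.
x_a, x_b = render(W, y, z_a), render(W, y, z_b)
print(x_a.shape, np.allclose(x_a, x_b))
```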


Proceedings of SPIE | 2009

Learning real and complex overcomplete representations from the statistics of natural images

Bruno A. Olshausen; Charles F. Cadieu; David K. Warland

We show how an overcomplete dictionary may be adapted to the statistics of natural images so as to provide a sparse representation of image content. When the degree of overcompleteness is low, the basis functions that emerge resemble those of Gabor wavelet transforms. As the degree of overcompleteness is increased, new families of basis functions emerge, including multiscale blobs, ridge-like functions, and gratings. When the basis functions and coefficients are allowed to be complex, they provide a description of image content in terms of local amplitude (contrast) and phase (position) of features. These complex, overcomplete transforms may be adapted to the statistics of natural movies by imposing both sparseness and temporal smoothness on the amplitudes. The basis functions that emerge form Hilbert pairs such that shifting the phase of the coefficient shifts the phase of the corresponding basis function. This type of representation is advantageous because it makes explicit the structural and dynamic content of images, which in turn allows later stages of processing to discover higher-order properties indicative of image content. We demonstrate this point by showing that it is possible to learn the higher-order structure of dynamic phase - i.e., motion - from the statistics of natural image sequences.
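Given an overcomplete dictionary, the sparse coefficients for a patch are found by an L1-penalized inference step. A self-contained sketch using a random unit-norm dictionary and iterative soft thresholding (ISTA); the paper additionally adapts the dictionary to natural images, a step omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Overcomplete dictionary: 64-dim "patches", 128 basis functions (2x).
patch_dim, n_basis = 64, 128
D = rng.normal(size=(patch_dim, n_basis))
D /= np.linalg.norm(D, axis=0)       # unit-norm basis functions

# A patch synthesized from a few basis functions (sparse ground truth).
coeffs_true = np.zeros(n_basis)
coeffs_true[[3, 40, 100]] = [1.5, -2.0, 1.0]
patch = D @ coeffs_true

def ista(D, x, lam=0.05, n_iter=500):
    """Sparse inference by iterative soft thresholding:
    minimize 0.5*||x - D a||^2 + lam*||a||_1 over coefficients a."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + step * D.T @ (x - D @ a)                        # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft threshold
    return a

a = ista(D, patch)
print(np.flatnonzero(np.abs(a) > 0.1))  # indices of the active basis functions
```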


IEEE Transactions on Biomedical Engineering | 2012

Multivariate Phase–Amplitude Cross-Frequency Coupling in Neurophysiological Signals

Ryan T. Canolty; Charles F. Cadieu; Kilian Koepsell; Robert T. Knight; Jose M. Carmena

Phase-amplitude cross-frequency coupling (CFC), where the phase of a low-frequency signal modulates the amplitude or power of a high-frequency signal, is a topic of increasing interest in neuroscience. However, existing methods of assessing CFC are inherently bivariate and cannot estimate CFC between more than two signals at a time. Given the increase in multielectrode recordings, this is a strong limitation. Furthermore, the phase coupling between multiple low-frequency signals is likely to produce a high rate of false positives when CFC is evaluated using bivariate methods. Here, we present a novel method for estimating the statistical dependence between one high-frequency signal and N low-frequency signals, termed multivariate phase-coupling estimation (PCE). Compared to bivariate methods, the PCE produces sparser estimates of CFC and can distinguish between direct and indirect coupling between neurophysiological signals, which is critical for accurately estimating coupling within multiscale brain networks.
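A common bivariate way to quantify phase-amplitude coupling is the magnitude of the mean amplitude-weighted phase vector; this is the style of measure whose limitations motivate the multivariate PCE. A toy sketch on a synthetic coupled signal (in practice the envelope and phase would come from band-pass filtering plus a Hilbert transform; here the generating envelope and phase are used directly):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, T = 1000.0, 10.0
t = np.arange(0, T, 1 / fs)

# Low-frequency (8 Hz) phase and a high-frequency (80 Hz) carrier whose
# amplitude is modulated by that phase: the canonical CFC scenario.
phi_low = 2 * np.pi * 8 * t
amp_high = 1.0 + 0.8 * np.cos(phi_low)            # amplitude tied to phase
coupled = amp_high * np.cos(2 * np.pi * 80 * t)
uncoupled = (1.0 + 0.8 * rng.normal(size=t.size)) * np.cos(2 * np.pi * 80 * t)

def modulation_index(amplitude, phase):
    """Mean-vector coupling measure: |<A(t) * exp(i*phase(t))>|.
    Near zero when amplitude is unrelated to phase."""
    return np.abs(np.mean(amplitude * np.exp(1j * phase)))

print(round(modulation_index(amp_high, phi_low), 2))           # strong coupling
print(round(modulation_index(np.abs(uncoupled), phi_low), 2))  # weak
```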


Journal of Vision | 2010

Shape Representation in V4: Investigating Position-Specific Tuning for Boundary Conformation with the Standard Model of Object Recognition

Charles F. Cadieu; Minjoon Kouh; Maximilian Riesenhuber; Tomaso Poggio

The computational processes in the intermediate stages of the ventral pathway responsible for visual object recognition are not well understood. A recent physiological study by A. Pasupathy and C. Connor in intermediate area V4, using contour stimuli, proposes that a population of V4 neurons displays object-centered, position-specific curvature tuning. The standard model of object recognition, a recently developed model to account for recognition properties of IT cells (extending classical suggestions by Hubel, Wiesel, and others), is used here to model the response of the V4 cells described by Pasupathy and Connor. Our results show that a feedforward, network-level mechanism can exhibit selectivity and invariance properties that correspond to the responses of the V4 cells. These results suggest how object-centered, position-specific curvature tuning of V4 cells may arise from combinations of complex V1 cell responses. Furthermore, the model makes predictions about the responses of the same V4 cells studied by Pasupathy and Connor to novel gray-level patterns, such as gratings and natural images. These predictions suggest specific experiments to further explore shape representation in V4.


Journal of Neurophysiology | 2012

Detecting event-related changes of multivariate phase coupling in dynamic brain networks.

Ryan T. Canolty; Charles F. Cadieu; Kilian Koepsell; Karunesh Ganguly; Robert T. Knight; Jose M. Carmena

Oscillatory phase coupling within large-scale brain networks is a topic of increasing interest within systems, cognitive, and theoretical neuroscience. Evidence shows that brain rhythms play a role in controlling neuronal excitability and response modulation (Haider B, McCormick D. Neuron 62: 171-189, 2009) and regulate the efficacy of communication between cortical regions (Fries P. Trends Cogn Sci 9: 474-480, 2005) and across distinct spatiotemporal scales (Canolty RT, Knight RT. Trends Cogn Sci 14: 506-515, 2010). In this view, anatomically connected brain areas form the scaffolding upon which neuronal oscillations rapidly create and dissolve transient functional networks (Lakatos P, Karmos G, Mehta A, Ulbert I, Schroeder C. Science 320: 110-113, 2008). Importantly, testing these hypotheses requires methods designed to accurately reflect dynamic changes in multivariate phase coupling within brain networks. Unfortunately, phase coupling between neurophysiological signals is commonly investigated using suboptimal techniques. Here we describe how a recently developed probabilistic model, phase coupling estimation (PCE; Cadieu C, Koepsell K. Neural Comput 22: 3107-3126, 2010), can be used to investigate changes in multivariate phase coupling, and we detail the advantages of this model over the commonly employed phase-locking value (PLV; Lachaux JP, Rodriguez E, Martinerie J, Varela F. Hum Brain Mapp 8: 194-208, 1999). We show that the N-dimensional PCE is a natural generalization of the inherently bivariate PLV. Using simulations, we show that PCE accurately captures both direct and indirect (network-mediated) coupling between network elements in situations where PLV produces erroneous results. We present empirical results on recordings from humans and nonhuman primates and show that the PCE-estimated coupling values are different from those using the bivariate PLV. Critically, on these empirical recordings, PCE output tends to be sparser than the PLVs, indicating fewer significant interactions and perhaps a more parsimonious description of the data. Finally, the physical interpretation of PCE parameters is straightforward: the PCE parameters correspond to interaction terms in a network of coupled oscillators. Forward modeling of a network of coupled oscillators with parameters estimated by PCE generates synthetic data with statistical characteristics identical to empirical signals. Given these advantages over the PLV, PCE is a useful tool for investigating multivariate phase coupling in distributed brain networks.
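The bivariate phase-locking value that PCE generalizes is simple to compute: the magnitude of the mean relative-phase vector. A minimal sketch on synthetic phase time series:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Two phase series: one locked to phi_a (constant offset plus jitter),
# one drawn independently.
phi_a = rng.uniform(0, 2 * np.pi, n)
phi_b_coupled = phi_a + 0.5 + 0.3 * rng.normal(size=n)
phi_b_indep = rng.uniform(0, 2 * np.pi, n)

def plv(phi_x, phi_y):
    """Phase-locking value: |<exp(i*(phi_x - phi_y))>|, in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

print(round(plv(phi_a, phi_b_coupled), 2))  # high: phases are locked
print(round(plv(phi_a, phi_b_indep), 2))    # near zero: independent phases
```

As the abstract notes, applying this pairwise measure across a network cannot distinguish direct from network-mediated coupling, which is what the multivariate PCE model addresses.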


Journal of Vision | 2010

Learning invariant and variant components of time-varying natural images

Bruno A. Olshausen; Charles F. Cadieu

A remarkable property of biological visual systems is their ability to infer structure within the visual world. In order to infer structure, a useful representation should separate the invariant from the variant information [1, 2]. Invariant information is important for determining ‘what’ we are seeing, recognizing objects and interpreting scenes; while variant information captures the ‘where’ or ‘how’ information, the transformations of objects. It has been hypothesized that biological visual systems represent ‘what’ and ‘where’ visual information in two separate cortical processing streams [3]. How do biological systems decompose visual information into separate invariant and variant representations?

Collaboration


Dive into Charles F. Cadieu's collaborations.

Top Co-Authors

James J. DiCarlo
Massachusetts Institute of Technology

Ha Hong
Massachusetts Institute of Technology

Minjoon Kouh
McGovern Institute for Brain Research

Tomaso Poggio
Massachusetts Institute of Technology