
Publication


Featured research published by Charles C. Peck.


IBM Journal of Research and Development | 2008

Identifying, tabulating, and analyzing contacts between branched neuron morphologies

James R. Kozloski; Konstantinos Sfyrakis; Sean L. Hill; Felix Schürmann; Charles C. Peck; Henry Markram

Simulating neural tissue requires the construction of models of the anatomical structure and physiological function of neural microcircuitry. The Blue Brain Project is simulating the microcircuitry of a neocortical column with very high structural and physiological precision. This paper describes how we model anatomical structure by identifying, tabulating, and analyzing contacts between 10⁴ neurons in a morphologically precise model of a column. A contact occurs when one element touches another, providing the opportunity for the subsequent creation of a simulated synapse. The architecture of our application divides the problem of detecting and analyzing contacts among thousands of processors on the IBM Blue Gene/L™ supercomputer. Data required for contact tabulation is encoded with geometrical data for contact detection and is exchanged among processors. Each processor selects a subset of neurons and then iteratively 1) divides the points that represent each neuron among column subvolumes, 2) detects contacts in a subvolume, 3) tabulates arbitrary categories of local contacts, 4) aggregates and analyzes global contacts, and 5) revises the contents of a column to achieve a statistical objective. Computing, analyzing, and optimizing local data in parallel across distributed global data objects involve problems common to other domains (such as three-dimensional image processing and registration). Thus, we discuss the generic nature of the application architecture.
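The subvolume decomposition at the heart of this pipeline can be sketched in a few lines. The following is an illustrative single-process sketch, not the Blue Gene/L implementation: the function name, grid cell size, and contact radius are assumptions, and a production version would also test neighboring subvolumes so contacts spanning a bin boundary are not missed.

```python
import numpy as np

def detect_contacts(points, neuron_ids, radius=1.0, cell=5.0):
    """Bin morphology points into cubic subvolumes, then test only point
    pairs that share a subvolume (sketch of the divide-and-tabulate idea;
    hypothetical names and parameters, not the published code)."""
    bins = {}
    for i, p in enumerate(points):
        key = tuple((p // cell).astype(int))   # subvolume index of this point
        bins.setdefault(key, []).append(i)
    contacts = []
    for idxs in bins.values():
        for a in range(len(idxs)):
            for b in range(a + 1, len(idxs)):
                i, j = idxs[a], idxs[b]
                # a contact requires two different neurons within `radius`
                if neuron_ids[i] != neuron_ids[j] and \
                        np.linalg.norm(points[i] - points[j]) <= radius:
                    contacts.append((neuron_ids[i], neuron_ids[j]))
    return contacts
```

Because each subvolume is tested independently, the outer loop parallelizes naturally, which is the property the paper exploits across thousands of processors.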


IEEE Transactions on Neural Networks | 2008

Unsupervised Segmentation With Dynamical Units

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.
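A minimal caricature of such amplitude-phase dynamics, assuming a simple relaxation rule for amplitude and Kuramoto-style coupling for phase. The paper derives its exact equations from a sparse-coding objective, so this is a sketch under those assumptions, not the published model:

```python
import numpy as np

def step(amp_in, phase_in, W, amp_out, phase_out, dt=0.1):
    """One schematic update of amplitude-phase output units.
    Amplitude relaxes toward the feedforward drive; phase is pulled,
    Kuramoto-style, toward the phases of strongly weighted inputs."""
    amp_out = amp_out + dt * (W @ amp_in - amp_out)
    pull = np.sum(W * amp_in * np.sin(phase_in - phase_out[:, None]), axis=1)
    phase_out = (phase_out + dt * pull) % (2 * np.pi)
    return amp_out, phase_out
```

In this reading, separation is carried by which output amplitudes grow, and segmentation by which input phases an output unit's phase locks onto.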


International Symposium on Neural Networks | 2008

Efficient segmentation in multi-layer oscillatory networks

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

In earlier work, we derived the dynamical behavior of a network of oscillatory units described by the amplitude and phase of oscillations. The dynamics were derived from an objective function that rewards both the faithfulness and the sparseness of representation. After unsupervised learning, the network is capable of separating mixtures of inputs, and also segmenting the inputs into components that most contribute to the classification of a given input object. In the current paper, we extend our analysis to multi-layer networks, and demonstrate that the dynamical equations derived earlier can be successfully applied to multi-layer networks. The topological connectivity between the different layers is derived from biological observations in primate visual cortex, and consists of receptive fields that are topographically mapped between layers. We explore the role of feedback connections, and show that increasing the diffusivity of feedback connections significantly improves segmentation performance, but does not affect separation performance.
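One concrete ingredient here, the topographically mapped connectivity between layers, is easy to illustrate. Below is a 1-D sketch with a Gaussian receptive-field profile, where the `spread` parameter stands in for the feedback diffusivity discussed above; the paper uses 2-D maps patterned on primate visual cortex, and the function name and normalization are assumptions:

```python
import numpy as np

def topographic_weights(n_lower, n_upper, spread=1.0):
    """Feedback weights from an upper to a lower 1-D layer. Each upper
    unit projects most strongly to lower units near its corresponding
    retinotopic position; larger `spread` gives more diffuse feedback."""
    pos_l = np.linspace(0, 1, n_lower)
    pos_u = np.linspace(0, 1, n_upper)
    d = pos_u[:, None] - pos_l[None, :]          # pairwise positional offsets
    W = np.exp(-d ** 2 / (2 * spread ** 2))      # Gaussian receptive field
    return W / W.sum(axis=1, keepdims=True)      # normalize per upper unit
```

Increasing `spread` flattens each row of the weight matrix, which is the sense in which feedback becomes more diffusive.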


Human Vision and Electronic Imaging Conference | 2005

A model of the formation of a self-organized cortical representation of color

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

In this paper we address the problem of understanding the cortical processing of color information. Unravelling the cortical representation of color is a difficult task, as the neural pathways for color processing have not been fully mapped, and there are few computational modelling efforts devoted to color. Hence, we first present a conjecture for an ideal target color map based on principles of color opponency, and constraints such as retinotopy and the two-dimensional nature of the map. We develop a computational model for the cortical processing of color information that seeks to produce this target color map in a self-organized manner. The input model consists of a luminance channel and opponent color channels, comprising red-green and blue-yellow signals. We use an optional stage consisting of applying an antagonistic center-surround filter to these channels. The input is projected to a restricted portion of the cortical network in a topographic way. The units in the cortical map receive the color opponent input, and compete amongst each other to represent the input. This competition is carried out through the determination of a local winner. By simulating a self-organizing map for color according to this scheme, we are largely able to achieve the desired target color map. According to recent neurophysiological findings, there is evidence for the representation of color mixtures in the cortex, which is consistent with our model. Furthermore, an orderly traversal of stimulus hues in the CIE chromaticity map corresponds to an orderly spatial traversal in the primate cortical area V2. Our experimental results are also consistent with this biological observation.
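The competition-and-neighborhood scheme described above can be sketched as a small self-organizing map over opponent-color inputs. This is an illustrative sketch, not the paper's model: the paper uses local winners within a restricted topographic projection, while this version uses a single global winner, and all names and parameter values are assumptions.

```python
import numpy as np

def train_color_som(inputs, grid=8, steps=2000, lr=0.2, sigma=2.0, seed=0):
    """Self-organizing map over (luminance, red-green, blue-yellow)
    input vectors. The winning unit and its neighbors move toward each
    presented input; the neighborhood shrinks over training."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid, grid, 3))              # prototype per map unit
    ys, xs = np.mgrid[0:grid, 0:grid]
    for t in range(steps):
        x = inputs[rng.integers(len(inputs))]    # random opponent-color sample
        d = np.sum((W - x) ** 2, axis=2)
        wy, wx = np.unravel_index(np.argmin(d), d.shape)   # winner
        s = sigma * (1 - t / steps) + 0.5        # shrinking neighborhood
        h = np.exp(-((ys - wy) ** 2 + (xs - wx) ** 2) / (2 * s ** 2))
        W += lr * h[..., None] * (x - W)
    return W
```

After training, nearby map units hold similar opponent-color prototypes, the self-organized analogue of the target color map.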


International Conference on Computational Science | 2003

Simulation infrastructure for modeling large scale neural systems

Charles C. Peck; James R. Kozloski; A. Ravishankar Rao; Guillermo A. Cecchi

This paper describes the Large-scale Edge Node Simulator, a problem solving environment for the implementation of large scale models of neural systems. This work was motivated by the absence of adequate modeling tools for this domain. The object-oriented Large-scale Edge Node Simulator was developed after a rigorous requirements analysis for this class of simulations. An example use of this environment for a complex neural simulation of cortical plasticity is presented. It is shown that the Large-scale Edge Node Simulator is capable of meeting the challenges of simulating large scale models of neural systems.


Archive | 2005

A Biologically Motivated Classifier that Preserves Implicit Relationship Information in Layered Networks

Charles C. Peck; James R. Kozloski; Guillermo A. Cecchi; A. Ravishankar Rao

A fundamental problem with layered neural networks is the loss of information about the relationships among features in the input space and relationships inferred by higher order classifiers. Information about these relationships is required to solve problems such as discrimination of simultaneously presented objects and discrimination of feature components. We propose a biologically motivated model for a classifier that preserves this information. When composed into classification networks, we show that the classifier propagates and aggregates information about feature relationships. We discuss how the model should be capable of segregating this information for the purpose of object discrimination and aggregating multiple feature components for the purpose of feature component discrimination.


Bio-Inspired Computing and Communication | 2008

Network-Related Challenges and Insights from Neuroscience

Charles C. Peck; James R. Kozloski; Guillermo A. Cecchi; Sean Hill; Felix Schürmann; Henry Markram; Ravi Rao

At nearly every spatio-temporal scale and level of integration, the brain may be studied as a network of nearly unrivaled complexity. The network perspective provides valuable insights into the structure and function of the brain. In turn, the structure and function of the brain provide insights into the nature and capabilities of networks. As a consequence, neuroscience provides a rich offering of network-related challenges and insights for those designing networks to solve complex problems. This paper explores techniques for extracting and characterizing the networks of the brain, classification of brain function based on networks derived from fMRI, and specific challenges, such as the disambiguation of classification network representations, and functional self-organization of cortical networks. This exploration visits theory and data driven neural system modeling validated respectively by capabilities and biological experiments, analysis of biological data, and theoretical analysis of static networks. Finally, techniques that build upon the network perspective are presented.


International Symposium on Neural Networks | 2007

Emergence of Topographic Cortical Maps in a Parameterless Local Competition Network

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

A major research problem in the area of unsupervised learning is the understanding of neuronal selectivity, and its role in the formation of cortical maps. Kohonen devised a self-organizing map algorithm to investigate this problem, which achieved partial success in replicating biological observations. However, a problem in using Kohonen's approach is that it does not address the stability-plasticity dilemma, as the learning rate decreases monotonically. In this paper, we propose a solution to cortical map formation which tackles the stability-plasticity problem, where the map maintains stability while enabling plasticity in the presence of changing input statistics. We adapt the parameterless SOM (Berglund and Sitte 2006) and also modify Kohonen's original approach to allow local competition in a larger cortex, where multiple winners can exist. The learning rate and neighborhood size of the modified Kohonen method are set automatically based on the error between the local winner's weight vector and its input. We used input images consisting of lines of random orientation to train the system in an unsupervised manner. Our model shows large scale topographic organization of orientation across the cortex, which compares favorably with cortical maps measured in visual area V1 in primates. Furthermore, we demonstrate the plasticity of this map by showing that the map reorganizes when the input statistics are changed.
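The error-driven rate idea can be sketched as a parameterless-SOM-style update in the spirit of Berglund and Sitte (2006): the learning rate is not a scheduled parameter but is derived from the normalized error between the winner's weight vector and the current input. This sketch uses a single global winner, whereas the paper extends the rule to multiple local winners; the function name and neighborhood scaling are assumptions.

```python
import numpy as np

def plsom_update(W, x, rmax=None):
    """One update: learning rate and neighborhood size both scale with
    the winner's (normalized) error, so a well-fit map stays stable
    while a poorly-fit map remains plastic."""
    d = np.linalg.norm(W - x, axis=-1)
    w = np.unravel_index(np.argmin(d), d.shape)   # winner location
    r = d[w]                                      # winner's error
    rmax = r if rmax is None else max(rmax, r)    # running error normalizer
    eps = r / rmax if rmax > 0 else 0.0           # error-driven learning rate
    ys, xs = np.mgrid[0:W.shape[0], 0:W.shape[1]]
    sigma = 1.0 + 3.0 * eps                       # neighborhood widens with error
    h = np.exp(-((ys - w[0]) ** 2 + (xs - w[1]) ** 2) / (2 * sigma ** 2))
    W += eps * h[..., None] * (x - W)
    return W, rmax
```

When the input is already well represented, the winner's error is near zero, so the update vanishes; a novel input produces a large error and hence a large, wide update, which is how the rule sidesteps a monotonically decaying learning-rate schedule.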


Electronic Imaging | 2006

Inference and segmentation in cortical processing

Yuan Liu; Guillermo A. Cecchi; A. Ravishankar Rao; James R. Kozloski; Charles C. Peck

We present a modelling framework for cortical processing aimed at understanding how, maintaining biological plausibility, neural network models can: (a) approximate general inference algorithms like belief propagation, combining bottom-up and top-down information, (b) solve Rosenblatt's classical superposition problem, which we link to the binding problem, and (c) do so based on an unsupervised learning approach. The framework leads to two related models: the first model shows that the use of top-down feedback significantly improves the network's ability to perform inference on corrupted inputs; the second model, including oscillatory behavior in the processing units, shows that the superposition problem can be efficiently solved based on the units' phases.
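A toy version of the first model's claim, that top-down feedback helps inference on corrupted inputs, can be written in a few lines. This is a crude stand-in for the belief-propagation-style message passing, not the paper's network: it assumes the stored patterns are orthonormal rows of a hypothetical weight matrix `W`, so the top-down pass is just a projection onto the pattern subspace.

```python
import numpy as np

def reconstruct(x_corrupt, W, iters=10):
    """Iteratively mix the observed input with its top-down prediction.
    The bottom-up pass encodes the input; the top-down pass reprojects
    the code; averaging the two suppresses components that none of the
    stored patterns explain."""
    x = x_corrupt.copy()
    for _ in range(iters):
        h = W @ x                         # bottom-up encoding
        x = 0.5 * x + 0.5 * (W.T @ h)     # blend input with top-down prediction
    return x
```

Each iteration halves the corruption lying outside the pattern subspace while leaving the explained components untouched.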


Electronic Imaging | 2006

Translation invariance in a network of oscillatory units

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

One of the important features of the human visual system is that it is able to recognize objects in a scale and translation invariant manner. However, achieving this desirable behavior through biologically realistic networks is a challenge. The synchronization of neuronal firing patterns has been suggested as a possible solution to the binding problem (where a biological mechanism is sought to explain how features that represent an object can be scattered across a network, and yet be unified). This observation has led to neurons being modeled as oscillatory dynamical units. It is possible for a network of these dynamical units to exhibit synchronized oscillations under the right conditions. These network models have been applied to solve signal deconvolution or blind source separation problems. However, the use of the same network to achieve properties that the visual system exhibits, such as scale and translation invariance, has not been fully explored. Some approaches investigated in the literature (Wallis, 1996) involve the use of non-oscillatory elements that are arranged in a hierarchy of layers. The objects presented are allowed to move, and the network utilizes a trace learning rule, where a time-averaged output value is used to perform Hebbian learning with respect to the input value. This is a modification of the standard Hebbian learning rule, which typically uses instantaneous values of the input and output. In this paper we present a network of oscillatory amplitude-phase units connected in two layers. The types of connections include feedforward, feedback and lateral. The network consists of amplitude-phase units that can exhibit synchronized oscillations. We have previously shown that such a network can segment the components of each input object that most contribute to its classification. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple.
We extend the ability of this network to address the problem of translational invariance. We show that by adopting a specific treatment of the phase values of the output layer, the network exhibits translation-invariant object representation. The scheme used in training is as follows. The network is presented with an input, which then moves. During the motion the amplitude and phase of the upper layer units are not reset, but continue from their past values before the introduction of the object in the new position. Only the input layer is changed instantaneously to reflect the moving object. The network behavior is such that it categorizes the translated objects with the same label as the stationary object, thus establishing an invariant categorization with respect to translation. This is a promising result as it uses the same framework of oscillatory units that achieves synchrony, and introduces motion to achieve translational invariance.
