
Publications

Featured research published by James R. Kozloski.


Frontiers in Neuroinformatics | 2013

Self-referential forces are sufficient to explain different dendritic morphologies

Heraldo Memelli; Benjamin Torben-Nielsen; James R. Kozloski

Dendritic morphology constrains brain activity, as it determines first which neuronal circuits are possible and second which dendritic computations can be performed over a neuron's inputs. It is known that a range of chemical cues can influence the final shape of dendrites during development. Here, we investigate the extent to which self-referential influences, cues generated by the neuron itself, might influence morphology. To this end, we developed a phenomenological model and algorithm to generate virtual morphologies, which are then compared to experimentally reconstructed morphologies. In the model, branching probability follows a Galton–Watson process, while the geometry is determined by “homotypic forces” exerting influence on the direction of random growth in a constrained space. We model three such homotypic forces, namely an inertial force based on membrane stiffness, a soma-oriented tropism, and a force of self-avoidance, as directional biases in the growth algorithm. With computer simulations we explored how each bias shapes neuronal morphologies. We show that based on these principles, we can generate realistic morphologies of several distinct neuronal types. We discuss the extent to which homotypic forces might influence real dendritic morphologies, and speculate about the influence of other environmental cues on neuronal shape and circuitry.
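The growth scheme described above can be sketched in a few lines. This is an illustrative 2D toy, not the authors' algorithm: each new segment direction mixes an inertial term (the previous direction), a soma-oriented tropism, and self-avoidance from already-placed points, plus noise. The function names, weights, and the negative sign on the soma term (pulling growth back toward the soma at the origin) are all assumptions for the example.

```python
import math
import random

def normalize(v):
    """Return v scaled to unit length (zero vector passes through)."""
    n = math.hypot(v[0], v[1]) or 1.0
    return (v[0] / n, v[1] / n)

def grow_branch(start, direction, existing, steps=20,
                w_inertia=1.0, w_soma=-0.2, w_avoid=0.5, step_len=1.0):
    """Grow one 2D branch under homotypic biases; returns the points laid down."""
    points = [start]
    pos, d = start, normalize(direction)
    for _ in range(steps):
        # Soma tropism: unit vector from the soma (origin) to the tip;
        # a negative weight biases growth back toward the soma.
        soma = normalize(pos) if pos != (0.0, 0.0) else (0.0, 0.0)
        # Self-avoidance: repulsion from the nearest previously placed point,
        # skipping this branch's own newest points to avoid a degenerate pull.
        avoid = (0.0, 0.0)
        candidates = existing[:-3]
        if candidates:
            q = min(candidates, key=lambda p: (p[0]-pos[0])**2 + (p[1]-pos[1])**2)
            avoid = normalize((pos[0]-q[0], pos[1]-q[1]))
        noise = (random.gauss(0, 0.3), random.gauss(0, 0.3))
        d = normalize((w_inertia*d[0] + w_soma*soma[0] + w_avoid*avoid[0] + noise[0],
                       w_inertia*d[1] + w_soma*soma[1] + w_avoid*avoid[1] + noise[1]))
        pos = (pos[0] + step_len*d[0], pos[1] + step_len*d[1])
        points.append(pos)
        existing.append(pos)
    return points
```

In the paper's framing, a Galton–Watson branching step would decide at each point whether the branch bifurcates; here only the geometric biases are shown.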


IBM Journal of Research and Development | 2008

Identifying, tabulating, and analyzing contacts between branched neuron morphologies

James R. Kozloski; Konstantinos Sfyrakis; Sean L. Hill; Felix Schürmann; Charles C. Peck; Henry Markram

Simulating neural tissue requires the construction of models of the anatomical structure and physiological function of neural microcircuitry. The Blue Brain Project is simulating the microcircuitry of a neocortical column with very high structural and physiological precision. This paper describes how we model anatomical structure by identifying, tabulating, and analyzing contacts between 10^4 neurons in a morphologically precise model of a column. A contact occurs when one element touches another, providing the opportunity for the subsequent creation of a simulated synapse. The architecture of our application divides the problem of detecting and analyzing contacts among thousands of processors on the IBM Blue Gene/L™ supercomputer. Data required for contact tabulation is encoded with geometrical data for contact detection and is exchanged among processors. Each processor selects a subset of neurons and then iteratively 1) divides the number of points that represents each neuron among column subvolumes, 2) detects contacts in a subvolume, 3) tabulates arbitrary categories of local contacts, 4) aggregates and analyzes global contacts, and 5) revises the contents of a column to achieve a statistical objective. Computing, analyzing, and optimizing local data in parallel across distributed global data objects involve problems common to other domains (such as three-dimensional image processing and registration). Thus, we discuss the generic nature of the application architecture.
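The core subvolume idea can be illustrated with a small serial sketch (not the Blue Gene implementation): morphology points are binned into regular subvolumes, and contact tests run only within each bin and its 26 neighbors, so the cost stays local. The function name and contact radius are assumptions for the example.

```python
from collections import defaultdict
from itertools import product
import math

def detect_contacts(points, radius):
    """points: list of (neuron_id, (x, y, z)) samples along morphologies.
    Returns the set of neuron-id pairs with at least one contact."""
    cell = radius  # subvolume edge length chosen equal to the contact radius
    grid = defaultdict(list)
    for nid, p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        grid[key].append((nid, p))
    contacts = set()
    for key, members in grid.items():
        # Candidates live in this subvolume or one of its 26 neighbors.
        nearby = []
        for off in product((-1, 0, 1), repeat=3):
            nearby.extend(grid.get(tuple(k + o for k, o in zip(key, off)), []))
        for nid, p in members:
            for mid, q in nearby:
                if mid != nid and math.dist(p, q) <= radius:
                    contacts.add(tuple(sorted((nid, mid))))
    return contacts
```

The parallel version described in the paper distributes these subvolumes across processors and exchanges boundary data; the local test per bin is the same.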


Frontiers in Neural Circuits | 2010

A Theory of Loop Formation and Elimination by Spike Timing-Dependent Plasticity

James R. Kozloski; Guillermo A. Cecchi

We show that the local spike timing-dependent plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise synaptic weights that exceed a positive threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network driven by noise. Thus, a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long range functional loops among individual neurons across all brain scales, up to, and including, the scale of global brain network topology.
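To make the polarity discussed above concrete, here is the standard pair-based STDP kernel with an explicit polarity switch. The time constants and amplitudes are illustrative, not values from the paper:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0, polarity=+1):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    With polarity=+1 (standard STDP), causal pre-before-post pairs are
    potentiated and anti-causal pairs depressed; polarity=-1 flips both."""
    if dt > 0:       # post fires after pre
        return polarity * a_plus * math.exp(-dt / tau)
    elif dt < 0:     # post fires before pre
        return -polarity * a_minus * math.exp(dt / tau)
    return 0.0
```

Under the paper's result, iterating a rule of this shape over a recurrent network weakens the weights around closed loops for standard polarity, leaving predominantly feed-forward functional connectivity.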


IEEE Transactions on Neural Networks | 2008

Unsupervised Segmentation With Dynamical Units

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.
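A minimal way to picture amplitude-phase units is as complex numbers z = a·e^{iθ}, with outputs relaxing toward a complex-weighted sum of the inputs, so that amplitude carries classification and relative phase carries segmentation. This toy sketch is in that spirit only; it is not the paper's objective-function dynamics, and all names are assumptions:

```python
import math
import cmath

def step(z_in, W, z_out, rate=0.5):
    """One relaxation step: each output moves toward its complex-weighted drive."""
    new_out = []
    for j, zj in enumerate(z_out):
        drive = sum(W[j][i] * zi for i, zi in enumerate(z_in))
        new_out.append((1 - rate) * zj + rate * drive)
    return new_out

def phase_similarity(z1, z2):
    """Cosine of the phase difference between two unit states (1 = in phase)."""
    return math.cos(cmath.phase(z1) - cmath.phase(z2))
```

Input units whose phase similarity with an active output unit approaches 1 would be read out as the segment belonging to that output's object.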


Frontiers in Neuroinformatics | 2011

An Ultrascalable Solution to Large-scale Neural Tissue Simulation

James R. Kozloski; John Wagner

Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition, and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine with more than 4000 nodes and up to four threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. We discuss the implications of our ultrascalable Neural Tissue Simulator, and with our results estimate requirements for a simulation at the scale of a human brain.
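The block decomposition described above can be sketched as a hash from tissue coordinates to fixed-size blocks, with blocks then dealt out to parallel ranks. Block size, names, and the round-robin placement are illustrative assumptions, not the simulator's actual policy:

```python
import math

def block_of(pos, block_size=100.0):
    """Map a 3D position in tissue coordinates to its regular block index."""
    return tuple(int(math.floor(c / block_size)) for c in pos)

def assign_blocks(positions, n_ranks, block_size=100.0):
    """Group compartment positions by tissue block, then deal blocks to ranks."""
    blocks = {}
    for p in positions:
        blocks.setdefault(block_of(p, block_size), []).append(p)
    assignment = {}
    for i, key in enumerate(sorted(blocks)):
        assignment[key] = i % n_ranks  # round-robin placement across ranks
    return blocks, assignment
```

Because synapses form between points that are spatially close, most of them fall inside a single block under such a decomposition, which is consistent with the locality the abstract reports.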


Neuroinformatics | 2011

Automated Reconstruction of Neural Tissue and the Role of Large-Scale Simulation

James R. Kozloski

Keywords: DIADEM · High-throughput · Neural reconstruction · Neural tissue · Simulation · Compartmental modeling · Hodgkin–Huxley model · Structural modeling · Developmental modeling · Neuron growth · Parallel computing · Data decomposition · Neural Tissue Simulator · Blue Gene

The brain implements a myriad of global brain functions to support adaptive behaviors. Despite their seeming innumerability, these emerge from combinations of lower level functions implemented by a relatively small set of brain tissues. Evidence from brain imaging studies shows that spatiotemporal patterns of activations across different brain tissues correlate with brain function (and hence with an organism's behavior). To support a diversity of global functions, gross connections between brain tissues, while structurally static, must undergo modulation. The strength of this modulation can define functional boundaries and interfaces between brain tissues: wherever functional relationships between brain regions are highly modulated, tissue boundaries occur.

Tissue-level functions, while also diverse, are more stereotyped than global brain functions. Similar to spatiotemporal modulation and recombination of tissue activation, variation and recombination of familiar structural elements of the brain (neurons and their connections, synapses) generate tissue-level functions. Unlike other organs' gross morphological specializations of single tissues (e.g., muscle, bone), brain specialization yields distinct tissues derived from stationary statistical combinations of a variety of neuron and synapse types in space, which we define as microcircuitry. Measurable, consistent patterning of microcircuitry across a tissue and in different organisms (i.e., stereotypy) further defines a tissue's boundaries: wherever patterning changes abruptly, one tissue ends and another begins.

Shepherd defined microcircuits abstractly and independent of neural tissues, based on simple computations they might implement.


International Conference on Universal Access in Human-Computer Interaction | 2009

Cognitive Impairments, HCI and Daily Living

Simeon Keates; James R. Kozloski; Philip Varker

As computer systems become increasingly more pervasive in everyday life, it is simultaneously becoming ever more important that the concept of universal access is accepted as a design mantra. While many physical impairments and their implications for human-computer interaction are well understood, cognitive impairments have received comparatively little attention. One of the reasons for this is the general lack of sufficiently detailed cognitive models. This paper examines how cognitive impairments can affect human-computer interaction in everyday life and the issues involved in trying to make information technology more accessible to users with cognitive impairments.


International Symposium on Neural Networks | 2008

Efficient segmentation in multi-layer oscillatory networks

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

In earlier work, we derived the dynamical behavior of a network of oscillatory units described by the amplitude and phase of oscillations. The dynamics were derived from an objective function that rewards both the faithfulness and the sparseness of representation. After unsupervised learning, the network is capable of separating mixtures of inputs, and also segmenting the inputs into components that most contribute to the classification of a given input object. In the current paper, we extend our analysis to multi-layer networks, and demonstrate that the dynamical equations derived earlier can be successfully applied to multi-layer networks. The topological connectivity between the different layers is derived from biological observations in primate visual cortex, and consists of receptive fields that are topographically mapped between layers. We explore the role of feedback connections, and show that increasing the diffusivity of feedback connections significantly improves segmentation performance, but does not affect separation performance.


Human Vision and Electronic Imaging Conference | 2005

A model of the formation of a self-organized cortical representation of color

A. Ravishankar Rao; Guillermo A. Cecchi; Charles C. Peck; James R. Kozloski

In this paper we address the problem of understanding the cortical processing of color information. Unravelling the cortical representation of color is a difficult task, as the neural pathways for color processing have not been fully mapped, and there are few computational modelling efforts devoted to color. Hence, we first present a conjecture for an ideal target color map based on principles of color opponency, and constraints such as retinotopy and the two dimensional nature of the map. We develop a computational model for the cortical processing of color information that seeks to produce this target color map in a self-organized manner. The input model consists of a luminance channel and opponent color channels, comprising red-green and blue-yellow signals. We use an optional stage consisting of applying an antagonistic center-surround filter to these channels. The input is projected to a restricted portion of the cortical network in a topographic way. The units in the cortical map receive the color opponent input, and compete amongst each other to represent the input. This competition is carried out through the determination of a local winner. By simulating a self-organizing map for color according to this scheme, we are largely able to achieve the desired target color map. According to recent neurophysiological findings, there is evidence for the representation of color mixtures in the cortex, which is consistent with our model. Furthermore, an orderly traversal of stimulus hues in the CIE chromaticity map corresponds to an orderly spatial traversal in the primate cortical area V2. Our experimental results are also consistent with this biological observation.
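The competition-plus-neighborhood scheme described above follows the classic self-organizing map recipe, which can be sketched on opponent-color inputs. This toy uses (luminance, red-green, blue-yellow) vectors and a Gaussian neighborhood; grid size, learning rate, and all names are illustrative assumptions, not the paper's parameters:

```python
import math
import random

def train_som(inputs, grid=8, epochs=30, lr=0.3, sigma=1.5, seed=0):
    """Fit a grid of 3-component weights to opponent-color inputs.
    Each input's best-matching unit (local winner) and its grid
    neighbors move toward the input, producing a topographic map."""
    rng = random.Random(seed)
    w = [[[rng.uniform(-1, 1) for _ in range(3)] for _ in range(grid)]
         for _ in range(grid)]
    for _ in range(epochs):
        for x in inputs:
            # Local competition: the unit closest to the input wins.
            bi, bj = min(((i, j) for i in range(grid) for j in range(grid)),
                         key=lambda ij: sum((w[ij[0]][ij[1]][k] - x[k])**2
                                            for k in range(3)))
            for i in range(grid):
                for j in range(grid):
                    h = math.exp(-((i - bi)**2 + (j - bj)**2) / (2 * sigma**2))
                    for k in range(3):
                        w[i][j][k] += lr * h * (x[k] - w[i][j][k])
    return w
```

After training on a handful of opponent colors, each color's winning unit sits in a distinct patch of the grid, a crude analog of the orderly hue traversal the abstract describes.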


International Conference on Computational Science | 2003

Simulation infrastructure for modeling large scale neural systems

Charles C. Peck; James R. Kozloski; A. Ravishankar Rao; Guillermo A. Cecchi

This paper describes the Large-scale Edge Node Simulator, a problem solving environment for the implementation of large scale models of neural systems. This work was motivated by the absence of adequate modeling tools for this domain. The object-oriented Large-scale Edge Node Simulator was developed after a rigorous requirements analysis for this class of simulations. An example use of this environment for a complex neural simulation of cortical plasticity is presented. It is shown that the Large-scale Edge Node Simulator is capable of meeting the challenges of simulating large scale models of neural systems.
