Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher Johansson is active.

Publication


Featured research published by Christopher Johansson.


Neural Networks | 2007

Towards cortex sized artificial neural systems

Christopher Johansson; Anders Lansner

We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both a floating- and fixed-point arithmetic implementation of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is not communication but computation bounded. A mouse and rat cortex sized version of our model executes in 44% and 23% of real time, respectively. Further, an instance of the model with 1.6 x 10^6 units and 2 x 10^11 connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.
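To make the architecture named in this abstract concrete, the sketch below wires leaky integrator units into hypercolumns of competing minicolumn units and applies a simple Hebbian update. It is a rough illustration under assumed toy parameters (unit counts, time constants, learning rate, winner-take-all competition), not the authors' implementation.

```python
import numpy as np

# Toy sketch of the modular architecture described above: H hypercolumns,
# each holding M minicolumn units, leaky-integrator dynamics, winner-take-all
# within each hypercolumn, and a simple Hebbian update.
# All parameter values are arbitrary illustrations, not the paper's.
H, M = 10, 8                  # hypercolumns x units (minicolumns) each
N = H * M
tau, dt, lr = 10.0, 1.0, 0.01

rng = np.random.default_rng(0)
W = np.zeros((N, N))          # recurrent weights (sparse in the real model)
s = np.zeros(N)               # leaky "membrane" state of each unit
a = np.zeros(N)               # unit activity

for t in range(100):
    x = 0.1 * rng.random(N)                      # stand-in external input
    s += (dt / tau) * (-s + W @ a + x)           # leaky integration
    a = np.zeros(N)
    for h in range(H):                           # local competition:
        block = s[h * M:(h + 1) * M]             # one active unit per
        a[h * M + int(np.argmax(block))] = 1.0   # hypercolumn
    W += lr * np.outer(a, a)                     # continuous Hebbian update
    np.fill_diagonal(W, 0.0)                     # no self-connections
```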


IBM Journal of Research and Development | 2008

Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer

Mikael Djurfeldt; Mikael Lundqvist; Christopher Johansson; Martin Rehn; Örjan Ekeberg; Anders Lansner

Biologically detailed large-scale models of the brain can now be simulated thanks to increasingly powerful massively parallel supercomputers. We present an overview, for the general technical reader, of a neuronal network model of layers II/III of the neocortex built with biophysical model neurons. These simulations, carried out on an IBM Blue Gene/L™ supercomputer, comprise up to 22 million neurons and 11 billion synapses, which makes them the largest simulations of this type ever performed. Such model sizes correspond to the cortex of a small mammal. The SPLIT library, used for these simulations, runs on single-processor as well as massively parallel machines. Performance measurements show good scaling behavior on the Blue Gene/L supercomputer up to 8,192 processors. Several key phenomena seen in the living brain appear as emergent phenomena in the simulations. We discuss the role of this kind of model in neuroscience and note that full-scale models may be necessary to preserve natural dynamics. We also discuss the need for software tools for the specification of models as well as for analysis and visualization of output data. Combining models that range from abstract connectionist type to biophysically detailed will help us unravel the basic principles underlying neocortical function.


Neurocomputing | 2006

Attractor neural networks with patchy connectivity

Christopher Johansson; Martin Rehn; Anders Lansner

The neurons in the mammalian visual cortex are arranged in columnar structures, and the synaptic contacts of the pyramidal neurons in layer II/III are clustered into patches that are sparsely distributed over the surrounding cortical surface. Here, we use an attractor neural-network model of the cortical circuitry and investigate the effects of patchy connectivity on both the properties of the network and its attractor dynamics. An analysis of the network shows that the signal-to-noise ratio of the synaptic potential sums is improved by the patchy connectivity, which results in a higher storage capacity. This analysis is performed for both the Hopfield and Willshaw learning rules, and the results are confirmed by simulation experiments.
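As a reminder of the two learning rules compared in this paper, the following sketch builds Hopfield-style and Willshaw (clipped Hebbian) weight matrices from sparse binary patterns. The pattern statistics are toy values, and the "patchy" connectivity is approximated by a random binary mask rather than the spatially clustered patches studied in the paper.

```python
import numpy as np

# Minimal sketch of the two learning rules compared above. Patchy
# connectivity is stood in for by a random binary mask; the paper uses
# spatially clustered patches, and all sizes here are illustrative.
rng = np.random.default_rng(1)
N, P, p = 200, 20, 0.1                               # units, patterns, activity
patterns = (rng.random((P, N)) < p).astype(float)    # sparse binary patterns

mask = (rng.random((N, N)) < 0.2).astype(float)      # stand-in connectivity mask
np.fill_diagonal(mask, 0.0)

# Hopfield-style outer-product rule on +/-1 patterns
S = 2 * patterns - 1
W_hopfield = mask * (S.T @ S) / N

# Willshaw (clipped Hebbian) rule on binary patterns
W_willshaw = mask * ((patterns.T @ patterns) > 0).astype(float)
```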


Neural Computation | 2007

Imposing Biological Constraints onto an Abstract Neocortical Attractor Network Model

Christopher Johansson; Anders Lansner

In this letter, we study an abstract model of neocortex based on its modularization into mini- and hypercolumns. We discuss a full-scale instance of this model and connect its network properties to the underlying biological properties of neurons in cortex. In particular, we discuss how the biological constraints put on the network determine the network's performance in terms of storage capacity. We show that a network instantiating the model scales well given the biologically constrained parameters on activity and connectivity, which also makes this network interesting as an engineered system. In this model, the minicolumns are grouped into hypercolumns that can be active or quiescent, and the model predicts that only a few percent of the hypercolumns should be active at any one time. With this model, we show that at least 20 to 30 pyramidal neurons should be aggregated into a minicolumn and at least 50 to 60 minicolumns should be grouped into a hypercolumn in order to achieve high storage capacity.
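To make the coding scheme described in this abstract concrete: a memory pattern activates only a small fraction of the hypercolumns, and exactly one minicolumn within each active hypercolumn. The sketch below samples such a pattern with illustrative counts, not the paper's full-scale parameter estimates.

```python
import numpy as np

# Illustrative hypercolumnar code: a pattern activates a small fraction of
# the hypercolumns and exactly one minicolumn in each active hypercolumn.
# The counts below are toy values, not the paper's full-scale figures.
rng = np.random.default_rng(2)
H, M = 1000, 50        # hypercolumns, minicolumns per hypercolumn
active_frac = 0.05     # "only a few percent ... active at any one time"

def sample_pattern():
    active_h = rng.choice(H, size=int(active_frac * H), replace=False)
    pattern = np.zeros((H, M), dtype=np.uint8)
    pattern[active_h, rng.integers(0, M, size=active_h.size)] = 1
    return pattern.ravel()                 # length H*M, very sparse

x = sample_pattern()
print(x.sum(), "of", x.size, "units active")   # 50 of 50000
```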


Lecture Notes in Computer Science | 2006

Attractor memory with self-organizing input

Christopher Johansson; Anders Lansner

We propose a neural network based autoassociative memory system for unsupervised learning. This system is intended to be an example of how a general information processing architecture, similar to that of neocortex, could be organized. The neural network has its units arranged into two separate groups called populations, one input and one hidden population. The units in the input population form receptive fields that project sparsely onto the units of the hidden population. Competitive learning is used to train these forward projections. The hidden population implements an attractor memory. A back projection from the hidden to the input population is trained with a Hebbian learning rule. This system is capable of processing correlated and densely coded patterns, which regular attractor neural networks handle very poorly. The system shows good performance on a number of typical attractor neural network tasks such as pattern completion, noise reduction, and prototype extraction.
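A rough sketch of the forward pathway described here: input units in a receptive field project onto a small hidden population trained with a standard competitive (winner-take-all) rule. The attractor memory and the Hebbian back projection are omitted, and the update rule and constants are illustrative rather than the paper's.

```python
import numpy as np

# Sketch of competitive learning for one receptive field projecting onto a
# small hidden population. The attractor memory and Hebbian back projection
# described above are omitted; rule and constants are illustrative only.
rng = np.random.default_rng(3)
n_in, n_hidden, lr = 16, 4, 0.05
W = rng.random((n_hidden, n_in))
W /= W.sum(axis=1, keepdims=True)               # normalized initial weights

def train_step(x):
    winner = int(np.argmax(W @ x))              # winner-take-all
    W[winner] += lr * (x - W[winner])           # move winner toward input
    return winner

for _ in range(500):
    x = (rng.random(n_in) < 0.3).astype(float)  # stand-in input pattern
    train_step(x)
```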


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2004

Towards Cortex Sized Artificial Nervous Systems

Christopher Johansson; Anders Lansner

We characterize the size and complexity of the mammalian cortices of human, macaque, cat, rat, and mouse. We map the cortical structure onto a Bayesian confidence propagating neural network (BCPNN). An architectural structure for the implementation of the BCPNN based on hypercolumnar modules is suggested. The bandwidth, memory, and computational demands for real-time operation of the system are calculated and simulated. It is concluded that the limiting factor is the computational and not the communication requirements.
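For context, the BCPNN mentioned here derives its weights from estimated activation probabilities. The sketch below shows a batch estimate of the commonly cited form of the rule, with weights as log-probability ratios and biases as log-probabilities; the paper's modular, incremental implementation may differ in detail, and the data here are toy values.

```python
import numpy as np

# Minimal batch sketch of the BCPNN weight rule in its commonly cited form:
# w_ij = log( P(i and j active) / (P(i) * P(j)) ), bias_j = log P(j).
# This is only a rough illustration with toy data and a small probability
# floor (eps), not the paper's incremental hypercolumnar implementation.
rng = np.random.default_rng(4)
X = (rng.random((1000, 20)) < 0.2).astype(float)   # toy binary activity data
eps = 1e-4

p_i = np.clip(X.mean(axis=0), eps, 1.0)            # unit activation probs
p_ij = np.clip((X.T @ X) / X.shape[0], eps, 1.0)   # co-activation probs

W = np.log(p_ij / np.outer(p_i, p_i))              # BCPNN weights
bias = np.log(p_i)                                 # BCPNN bias terms
support = X @ W + bias                             # per-pattern unit support
```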


OSPEL Workshop on Bio-inspired Signal Processing, Barcelona, Spain, 2007 | 2009

From ANN to Biomimetic Information Processing

Anders Lansner; Simon Benjaminsson; Christopher Johansson

Artificial neural networks (ANN) are useful components in today’s data analysis toolbox. They were initially inspired by the brain but are today accepted to be quite different from it. ANN typically lack scalability and mostly rely on supervised learning, both of which are biologically implausible features. Here we describe and evaluate a novel cortex-inspired hybrid algorithm. It is found to perform on par with a Support Vector Machine (SVM) in classification of activation patterns from the rat olfactory bulb. On-line unsupervised learning is shown to provide significant tolerance to sensor drift, an important property of algorithms used to analyze chemo-sensor data. Scalability of the approach is illustrated on the MNIST dataset of handwritten digits.


Neurocomputing | 2009

Implementing plastic weights in neural networks using low precision arithmetic

Christopher Johansson; Anders Lansner

In this letter, we develop a fixed-point arithmetic, low precision, implementation of an exponentially weighted moving average (EWMA) that is used in a neural network with plastic weights. We analyze the proposed design both analytically and experimentally, and we also evaluate its performance in the application of an attractor neural network. The EWMA in the proposed design has a constant relative truncation error, which is important for avoiding round-off errors in applications with slowly decaying processes, e.g. connectionist networks. We conclude that the proposed design offers greatly improved memory and computational efficiency compared to a naive implementation of the EWMA's difference equation, and that it is well suited for implementation in digital hardware.
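The EWMA in question follows the standard difference equation y[t] = (1 - a) * y[t-1] + a * x[t]. The sketch below shows only the naive fixed-point version that the paper improves upon, with an illustrative word length; it is not the constant-relative-truncation-error design proposed in the paper.

```python
# Naive fixed-point EWMA, y[t] = (1 - a) * y[t-1] + a * x[t], using integer
# arithmetic in an illustrative Q-format. This is the baseline the paper
# contrasts against, not its constant-relative-truncation-error design.
FRAC_BITS = 12                     # illustrative fractional word length
ONE = 1 << FRAC_BITS

def to_fixed(v: float) -> int:
    return int(round(v * ONE))

def ewma_step(y_fx: int, x_fx: int, a_fx: int) -> int:
    # Multiply in double width, then truncate back to the fixed-point format;
    # this truncation is where the round-off error discussed above arises.
    return ((ONE - a_fx) * y_fx + a_fx * x_fx) >> FRAC_BITS

a_fx = to_fixed(0.01)              # smoothing factor (slowly decaying trace)
y_fx = 0
for x in [1.0, 0.0, 0.0, 1.0, 1.0]:
    y_fx = ewma_step(y_fx, to_fixed(x), a_fx)
print(y_fx / ONE)                  # current trace value as a float
```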


Lecture Notes in Computer Science | 2004

Towards Cortex Sized Attractor ANN

Christopher Johansson; Anders Lansner

We review the structure of the cerebral cortex to determine its number of neurons and synapses and its modular organization. The organization of these neurons is then studied and mapped onto the framework of an artificial neural network (ANN). The computational requirements to run this ANN model are then estimated. The conclusion is that it is possible to simulate the mouse cortex today on a cluster computer, but not in real time.
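A back-of-envelope version of the kind of estimate described here, with purely illustrative numbers that are not the paper's figures:

```python
# Back-of-envelope estimate of the kind described above, with illustrative
# numbers that are NOT the paper's figures for mouse cortex.
n_synapses = 1.6e11        # assumed connection count
updates_per_second = 100   # assumed network update rate
ops_per_connection = 2     # multiply + add per connection per update

ops_per_second = n_synapses * updates_per_second * ops_per_connection
cluster_flops = 1e13       # assumed sustained cluster performance
print(f"required: {ops_per_second:.2e} op/s, "
      f"slowdown vs real time: {ops_per_second / cluster_flops:.1f}x")
```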


International Journal of Neural Systems | 2006

Clustering of Stored Memories in an Attractor Network with Local Competition

Christopher Johansson; Örjan Ekeberg; Anders Lansner

In this paper we study an attractor network with units that compete locally for activation, and we prove that a reduced version of it has fixpoint dynamics. We analyze, and complement with simulation experiments, the local characteristics of the network's attractors with respect to a parameter controlling the intensity of the local competition. We find that the attractors are hierarchically clustered when the parameter of the local competition is changed.
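One common way to express local competition of this kind is a softmax within each competing group, whose gain sets the competition intensity. The sketch below uses that generic formulation for illustration; it may not match the exact transfer function analyzed in the paper.

```python
import numpy as np

# Illustrative local competition: units are split into groups and compete
# through a softmax whose gain g sets the competition intensity (large g
# approaches winner-take-all). A generic formulation, not necessarily the
# exact transfer function analyzed in the paper.
def local_compete(support, n_groups, g):
    out = []
    for b in np.split(support, n_groups):
        e = np.exp(g * (b - b.max()))     # numerically stable softmax
        out.append(e / e.sum())
    return np.concatenate(out)

rng = np.random.default_rng(5)
s = rng.normal(size=20)
print(local_compete(s, n_groups=4, g=1.0))    # soft competition
print(local_compete(s, n_groups=4, g=20.0))   # near winner-take-all
```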

Collaboration


Dive into Christopher Johansson's collaborations.

Top Co-Authors

Anders Lansner | Royal Institute of Technology
Anders Sandberg | Royal Institute of Technology
Martin Rehn | Royal Institute of Technology
Örjan Ekeberg | Royal Institute of Technology
Mikael Djurfeldt | Royal Institute of Technology
Mikael Lundqvist | Royal Institute of Technology
Simon Benjaminsson | Royal Institute of Technology