Brain architecture: A design for natural computation
By Marcus Kaiser†
School of Computing Science, Newcastle University, Claremont Tower, Newcastle upon Tyne NE1 7RU, UK
Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne NE2 4HH, UK
Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.
Keywords: neural networks, computational neuroanatomy, network science, spatial graph, robustness, recovery
1. Introduction
The relation between the computer and the brain has always been of interest to scientists and the public alike, from the notion of 'thinking machines' and 'artificial intelligence' to the application of concepts from neuroscience, such as neural networks, to problems in computer science. The earliest computers, using the von Neumann architecture still in use today, already employed memory and a central processing unit based on concepts of brain architecture (von Neumann, 1958). Models of artificial neural networks, too, were inspired by the function of individual neurons as integrators of incoming signals. Detailed models of neural processing, however, are often limited to single tasks (e.g., pattern recognition) and one modality (e.g., only visual information). In addition, artificial neural networks, starting with Perceptrons (Rosenblatt, 1959), are designed as a general-purpose architecture, whereas the architecture of natural neural systems shows a high specialization according to different tasks and functions. Global models, on the other hand, often deal with functional circuits (e.g. movement planning) without a direct link to the local structure of the neural network. Therefore, much of the complexity of neural processing, in terms of combining local and global levels as well as integrating information from different domains, is largely missing from current models.

About 50 years ago, John von Neumann, inventor of the current computer architecture, thought about where computers and the brain are the same and where they differ (von Neumann, 1958).

† Author for correspondence ([email protected]).
Article submitted to Royal Society
After 50 years of technological progress, how do the benchmark characteristics differ? The human brain consists of about 10^10 neurons or processing units. The Internet, being the largest computer network, has only millions of processing units. However, the extension of the Internet to mobile services (pervasive computing) could lead to billions of processing nodes in the future. The human memory can be estimated from the adjustable synaptic weights of connections between neurons. However, these roughly 10^14 synapses/weights are only a first approximation of the hard-wired information storage, as the position of synapses, both absolute on the target neuron and relative to other synapses, influences signal integration. Computer memories have reached this level with some systems, such as the machines that store web information at Google, storing several petabytes (1 petabyte = 10^15 bytes; see http://en.wikipedia.org/wiki/Petabyte). However, computer systems are still far away from processing complex information like the human brain does. Regardless of processing units or memory, the main difference between computers and brains is their hardware architecture: how they are wired up.

In this article, we present recent results on the topology (architecture) of complex brain networks. These results are not about standard (artificial) neural networks that deal with one single task, e.g. face recognition. Rather, we look at the high-level organization of the brain, including modules for different tasks and different sensory modalities (e.g., sound, vision, touch). Nonetheless, similar organization (Buzsaki et al., 2004) and processing (Dyhrfjeld-Johnsen et al., 2007) has been found at the local level of connectivity within modules.
2. Cortical network organization

(a) Cluster organization
Cortical areas are brain modules which are defined by structural (microscopic) architecture. Observing the thickness and cell types of the cortical layers, several cortical areas can be distinguished (Brodmann, 1909). Furthermore, areas also show a functional specialization. Within one area, further sub-units (cortical columns) exist; however, these units will not be covered in this review as there is not enough information about their connectivity. Using neuro-anatomical techniques, it can be tested which areas are connected, that is, whether projections in one or both directions between the areas exist. If a fiber projection between two areas is found, the value '1' is entered in the adjacency matrix; the value '0' defines absent connections or cases where the existence of connections was not tested (figure 1a).

Contrary to popular belief, cortical networks are not completely connected, i.e. not 'everything is connected to everything else': only about 30% of all possible connections (arcs) between areas do exist. Instead, highly connected sets of nodes (clusters) are found that correspond to functional differentiation of areas. For example, clusters corresponding to visual, auditory, somatosensory and fronto-limbic processing were found in the cat cortical connectivity network (Hilgetag & Kaiser, 2004). Furthermore, about 20% of the connections are unidirectional (Felleman & van Essen, 1991), i.e. a direct projection from area A to area B but not vice versa exists. Although some of these connections might turn out to be bidirectional, as the reverse direction was not tested, there were several cases where it was confirmed that projections run in one direction only.

Figure 1. (a) Adjacency matrix of the cat connectivity network (55 nodes; 891 directed edges). Dots represent the '1' entries and white spaces the '0' entries of the adjacency matrix. (b) Macaque cortex (95 nodes; 2,402 directed edges).
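As a minimal illustration of how such an adjacency matrix is analyzed, connection density and the number of unidirectional arcs can be computed directly. The 5x5 matrix below is invented for illustration only; it is not real connectivity data.

```python
# Hypothetical directed adjacency matrix for five areas
# (1 = projection found, 0 = absent or untested);
# row i lists the outgoing projections of area i.
A = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0],
]

n = len(A)
possible = n * (n - 1)                      # self-connections excluded
existing = sum(map(sum, A))                 # number of existing arcs
density = existing / possible

# Unidirectional arcs: a projection from i to j without one from j to i.
unidirectional = sum(1 for i in range(n) for j in range(n)
                     if A[i][j] == 1 and A[j][i] == 0)

print(f"density = {density:.2f}, unidirectional arcs = {unidirectional}")
# density = 0.35, unidirectional arcs = 5
```

For the real cat matrix of figure 1a, the same computation yields the roughly 30% density and the substantial unidirectional fraction discussed above.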
Until now, there is not enough information about connectivity in the human brain to allow network analysis (Crick & Jones, 1993). However, several new non-invasive methods, including diffusion tensor imaging (Tuch et al., 2005) and resting state networks (Achard et al., 2006), are under development and might help to define human connectivity in the future. At the moment, however, we are bound to analyze known connectivity in the cat and the macaque (rhesus monkey, figure 1b) cortical networks (see also Passingham et al., 2002; Sporns et al., 2004). Both networks exhibit clusters, i.e. areas belonging to a cluster have many existing connections between them but there are few connections to areas of different clusters (Young, 1993; Scannell et al., 1995). These clusters are also functional and spatial units: two connected areas tend to be spatially adjacent on the cortical surface and tend to have a similar function (e.g., both taking part in visual processing). Whereas there is a preference for short-length connections to spatially neighboring areas for the macaque, about 10% of the connections cover a long distance.

(b) Small-world properties
Many complex networks exhibit properties of small-world networks (Watts & Strogatz, 1998). In these networks, neighbors are better connected than in comparable Erdős–Rényi random networks (Erdős & Rényi, 1960) (called random networks throughout the text), whereas the average path length remains as low as in random networks. Formally, the average shortest path (ASP; similar, though not identical, to the characteristic path length ℓ (Watts, 1999)) of a network with N nodes is the average number of edges that has to be crossed on the shortest path from any one node to another:

ASP = \frac{1}{N(N-1)} \sum_{i,j} d(i,j)  with  i \neq j,   (2.1)

where d(i,j) is the length of the shortest path between nodes i and j.

The neighborhood connectivity is usually measured by the clustering coefficient. The clustering coefficient of one node v with k_v neighbors is

C_v = \frac{|E(\Gamma_v)|}{\binom{k_v}{2}},   (2.2)

where |E(\Gamma_v)| is the number of edges in the neighborhood of v and \binom{k_v}{2} is the number of possible edges (Watts, 1999). In the following analysis, we use the term clustering coefficient for the average clustering coefficient over all nodes of a network.

Small-world properties were found on different organizational levels of neural networks: from the tiny nematode C. elegans with about 300 neurons (Watts & Strogatz, 1998) to cortical networks of the cat and the macaque (Hilgetag et al., 2000; Hilgetag & Kaiser, 2004). Whereas the clustering coefficient for the macaque is 49% (16% in random networks), the ASP is comparably low with 2.2 (2.0 in random networks). That is, on average only one or two intermediate areas are on the shortest path between two areas. Note that a high clustering coefficient does not necessarily correlate with the existence of multiple clusters. Indeed, the standard model for generating small-world networks by rewiring regular networks (Watts & Strogatz, 1998) does not lead to multiple clusters.
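Equations (2.1) and (2.2) can be sketched in a few lines of code. The toy graph below (two triangles joined by a bridge) is an invented example, not one of the cortical networks; the functions implement the definitions above for undirected graphs.

```python
from collections import deque

def asp(adj):
    """Average shortest path over all ordered pairs i != j, as in eq. (2.1).
    adj: dict mapping node -> set of neighbours (undirected, connected graph)."""
    n = len(adj)
    total = 0
    for src in adj:
        # breadth-first search gives shortest path lengths from src
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

def clustering_coefficient(adj):
    """Average of C_v = |E(Gamma_v)| / C(k_v, 2) over all nodes, as in eq. (2.2)."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # each undirected edge among neighbours is counted twice, hence / 2
        links = sum(1 for a in nbrs for b in adj[a] if b in nbrs) / 2
        coeffs.append(links / (k * (k - 1) / 2))
    return sum(coeffs) / len(adj)

# Toy graph: two triangles joined by one bridge edge.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(round(asp(adj), 3), round(clustering_coefficient(adj), 3))
# 1.8 0.778
```

The toy graph already shows the small-world signature on a miniature scale: high clustering (0.78) with a short average path (1.8).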
3. Robustness and recovery
Compared to technical networks (power grids or communication networks), the brain is remarkably robust towards damage. On the local level, Parkinson's disease in humans only becomes apparent after more than half of the cells in the responsible brain region are eliminated (Damier et al., 1999). On the global level, the loss of the whole primary visual cortex (areas 17, 18 and 19) in kittens can be compensated by another region, the postero-medial supra-sylvian area (PMLS) (Spear et al., 1988). On the other hand, the removal of a small number of nodes or edges of the network can lead to a breakdown of functional processing. As functional deficits are not related to the number or size of removed connections or brain tissue, it might be the role within the network that makes some elements more critical than others. Identifying these critical components has applications in neurosurgery, where important parts of the brain should remain intact even after the removal of a brain tumour and its surrounding tissue.
(a) Critical connections in neural systems
It was found that the robustness towards edge removal is linked to the high neighborhood connectivity and the existence of multiple clusters (Kaiser & Hilgetag, 2004a). For connections within clusters, many alternative pathways of comparable length exist once one edge is removed from the cluster (figure 2a). For edges between clusters, however, alternative pathways of comparable length are unavailable, and removal of such edges should have a larger effect on the network. The damage to the macaque network was measured as the increase of the ASP after single edge removal. Among several measures, the edge frequency (an approximate measure of edge betweenness) of an edge was the best predictor of the damage after edge elimination (linear correlation r = 0.8 for the macaque). The edge frequency of an edge counts the number of shortest paths in which the edge is included.

Furthermore, examining comparable benchmark networks with three clusters, edges with high edge frequency are the ones between clusters. In addition, removal of these edges causes the largest damage, measured as increase in ASP (figure 2b). Therefore, inter-cluster connections are critical for the network. Concerning random loss of fiber connections, however, in most cases one of the many connections within a cluster will be damaged, with little effect on the network. The chances of eliminating the fewer inter-cluster connections are lower. Therefore, the network is robust to random removal of an edge (Kaiser & Hilgetag, 2004a).

Figure 2. (a) Schematic drawing of a network with three clusters showing examples of an intra- (gray dashed line) and an inter-cluster (gray solid line) connection. (b) Edge frequency of the eliminated edge vs. ASP after edge removal (20 generated networks with three clusters, defined inter-cluster connections and random connectivity within clusters; inter-cluster connections: light-gray; connections within a cluster: black).

(b) Node removal behaviour similar to that of scale-free networks
In addition to high neighborhood clustering, many real-world networks have properties of scale-free networks (Barabási & Albert, 1999). In such networks, the probability for a node possessing k edges is P(k) ∝ k^(−γ). Therefore, the degree distribution, where the degree of a node is the number of its connections, follows a power law. This often results in highly connected nodes that would be unlikely to occur in random networks. Examples are technical networks such as the world wide web of links between web pages (Huberman & Adamic, 1999) and the Internet (Faloutsos et al., 1999) at the level of connections between domains/autonomous systems. Do cortical networks, as natural communication networks, share similar features?

In cortical networks, some structures (e.g. evolutionary older structures like the amygdala) are highly connected. Unfortunately, the degree distribution cannot be tested directly as fewer than 100 nodes are available in the cat and macaque cortical networks. However, using the node elimination pattern as an indirect measure, cortical networks were found to be similar to scale-free benchmark networks (Kaiser et al., 2007b).

In that approach, we tested the effect on the ASP of the macaque cortical network after subsequently eliminating nodes from the network until all nodes were removed (Albert et al., 2000). For random elimination, the increase in ASP was slow and reached a peak for a high fraction of deleted nodes before shrinking due to network fragmentation (figure 3a). When taking out nodes in a targeted way ranked by their connectivity (deleting the most highly connected nodes first), however, the increase in ASP was steep and a peak was reached at a fraction of about 35%. The curves for random and targeted node removal were similar for the benchmark scale-free networks (figure 3b) but not for generated random or small-world (Watts & Strogatz, 1998) networks (Kaiser et al., 2007b). Therefore, cortical as well as scale-free benchmark systems are robust to random node elimination but show a larger increase in ASP after removing highly connected nodes. Again, as for the edges, only few nodes are highly connected and therefore critical, so that the probability to select them randomly is low.

Figure 3. Average shortest path (ASP) after either random (dashed line) or targeted (gray solid line) subsequent node removal. (a) Macaque cortical network (73 nodes, 835 directed edges). (b) Scale-free benchmark network with the same number of nodes and edges (lines represent the average values over 50 generated networks and 50 runs each in the case of random node removal).
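A miniature version of this lesion experiment can be run on a generated scale-free benchmark. The network size, seed, and the simplified preferential-attachment generator below are assumptions for illustration; they are not the benchmark networks used in the study. Here, damage is summarized by the size of the largest connected component after removing 10% of the nodes either by degree rank (targeted) or at random.

```python
import random
from collections import deque

random.seed(42)

def ba_graph(n, m):
    """Rough Barabasi-Albert-style preferential attachment (undirected)."""
    adj = {i: set() for i in range(n)}
    for i in range(m + 1):                      # start from a small clique
        for j in range(i + 1, m + 1):
            adj[i].add(j); adj[j].add(i)
    stubs = [i for i in range(m + 1) for _ in range(m)]   # degree-biased pool
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:                 # m distinct, degree-biased picks
            targets.add(random.choice(stubs))
        for t in targets:
            adj[new].add(t); adj[t].add(new)
            stubs += [new, t]
    return adj

def components(adj):
    """Connected components (ignoring isolated nodes)."""
    seen, comps = set(), []
    for s in adj:
        if s in seen or not adj[s]:
            continue
        comp, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v); queue.append(v)
        seen |= comp
        comps.append(comp)
    return comps

def delete_nodes(adj, nodes):
    return {u: nbrs - nodes for u, nbrs in adj.items() if u not in nodes}

n_nodes, m = 200, 2
g = ba_graph(n_nodes, m)
hubs = set(sorted(g, key=lambda v: len(g[v]), reverse=True)[:20])   # targeted
rand = set(random.sample(sorted(g), 20))                            # random

lc_targeted = max(len(c) for c in components(delete_nodes(g, hubs)))
lc_random = max(len(c) for c in components(delete_nodes(g, rand)))
print(lc_targeted, lc_random)   # hub removal typically fragments the network more
```

Replacing the largest-component measure by the ASP of equation (2.1) reproduces the qualitative picture of figure 3: random removal changes little, targeted hub removal damages the network strongly.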
4. Processing
(a) Wiring constraints for processing
For microchips, increasing the length of electric wires increases the energy loss through heat dissipation. Inspired by these ideas, it was suggested that neural systems minimize their total wiring length. It was argued, for the C. elegans neural network and subsets of cortical networks, that components are indeed optimally placed (Cherniak, 1994). This means that any permutation of the node positions of the network, while connections are unchanged, results in a higher total connection length; the placement of nodes is thus optimized to minimize the total wiring length. However, using larger data sets than the original study, we found that a reduction in wiring length by swapping the position of network nodes was possible.

For the macaque, we analyzed wiring length using the spatial three-dimensional positions of 95 areas and their connectivity. The total wiring length lay between the case of only establishing the shortest-possible connections and that of establishing connections randomly regardless of distance (figure 4a). A reduction of the wiring length was possible due to the number of long-distance connections in the original networks (Kaiser & Hilgetag, 2004b), some of them even spanning almost the largest possible distance between areas. Why would these metabolically expensive connections exist in such large numbers? We tested the effect of removing all long-distance connections and replacing them by short-distance connections. Whereas several network measures improved, the value for the ASP increased when long-distance connections were unavailable (figure 4b). Retaining a low ASP has two benefits: first, there are fewer intermediate areas that might distort the signal; second, as fewer areas are part of shortest paths, the transmission delay along a pathway is reduced. The propagation of signals over long distances, without any delay imposed by intermediate nodes, has an effect on synchronization as well: both nearby (directly connected) areas and faraway areas are able to get a signal at about the same time and could have synchronous processing (Kaiser & Hilgetag, 2006).
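The position-permutation test can be sketched as a greedy swap optimization: keep the connections fixed, repeatedly swap the positions of two nodes, and accept the swap whenever total wiring length shrinks. The 2-D layout, edge-generation rule and parameters below are invented for illustration; the study used the actual 3-D area positions and a more thorough optimization.

```python
import random, math

random.seed(1)

# Hypothetical layout: 30 nodes at random 2-D positions, connected with a
# distance-dependent preference plus a few random long-distance edges.
n = 30
pos = {i: (random.random(), random.random()) for i in range(n)}
edges = []
for i in range(n):
    for j in range(i + 1, n):
        d = math.dist(pos[i], pos[j])
        if random.random() < 0.5 * math.exp(-3 * d) or random.random() < 0.03:
            edges.append((i, j))

def total_wiring(assignment):
    """Sum of Euclidean edge lengths when node v sits at pos[assignment[v]]."""
    return sum(math.dist(pos[assignment[i]], pos[assignment[j]]) for i, j in edges)

# Component placement optimization: swap node positions (connections fixed)
# and keep a swap whenever it shortens the total wiring (greedy hill-climbing).
assignment = list(range(n))
start = best = total_wiring(assignment)
for _ in range(20000):
    a, b = random.sample(range(n), 2)
    assignment[a], assignment[b] = assignment[b], assignment[a]
    cost = total_wiring(assignment)
    if cost < best:
        best = cost
    else:
        assignment[a], assignment[b] = assignment[b], assignment[a]  # undo swap

print(f"wiring reduced from {start:.2f} to {best:.2f}")
```

That a few swaps already shorten the wiring of a layout with long-distance edges mirrors the finding above: the presence of long-distance connections makes the original placement non-optimal in pure wiring-length terms.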
A low ASP might also be necessary because of the properties of neurons: John von Neumann, taking into account the low processing speed and accuracy of individual neurons, suggested that neural computation needed to be highly parallel, using a low number of subsequent processing steps (von Neumann, 1958). But having a low ASP also brings a potential danger: how can it be prevented that information or activity flows uncontrolled through the entire network?

(b) Balanced network activation through hierarchical connectivity
Few processing steps enable the rapid transfer of activation patterns through cortical networks, but this flow could potentially activate the whole brain. Such large-scale activations in the form of increased activity can be observed in the human brain during epileptic seizures: about 1% of the population is currently affected by epilepsy. In contrast to computer networks with a continuous flow of viruses and spam e-mails, the brain has some built-in mechanisms for preventing large-scale activation.

Figure 4. (a) Original placement of cortical areas. (b) Wiring length optimization leads to a reduction in total wiring length by 32% of the original length. (c) Placement after optimization for total wiring length.

An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable network activations within a limited critical range. In this range, the activity of neural populations in the network persists between the extremes of quickly dying out and activating a large part of the network, as during epileptic seizures. The standard model would be to achieve such a balance by having interacting excitatory and inhibitory neurons. Whereas such models are of great value on the local level of neural systems, they are less meaningful when trying to understand the global level of connections between columns, areas, or area clusters.

Global corticocortical connectivity (connections between brain areas) in mammals possesses an intricate, nonrandom organization. Projections are arranged in clusters of cortical areas, which are closely linked among each other, but less frequently with areas in other clusters. Such structural clusters broadly agree with functional cortical subdivisions. This cluster organization is found at several levels: neurons within a column, area or area cluster (e.g. visual cortex) are more frequently linked with each other than with neurons in the rest of the network (Hilgetag & Kaiser, 2004).

Using a basic spreading model without inhibition, we investigated how functional activations of nodes propagate through such a hierarchically clustered network (Kaiser et al., 2007a). The hierarchical network consisted of 1000 nodes made of 10 clusters with 100 nodes each. In addition, each cluster consisted of 10 sub-clusters with 10 nodes each (figure 5a, b). Connections were arranged so that there were more links within (sub-)clusters than between (sub-)clusters.
Starting with activating 10% of randomly chosen nodes, nodes became activated if at least six directly connected nodes were active. Furthermore, at each time step, activated nodes could become inactive with a probability of 30%.

The simulations demonstrated that persistent and scalable activation could be produced in clustered networks, but not in random or small-world networks of the same size (figure 5c-e). Robust sustained activity also occurred when the number of consecutive activated states of a node was limited due to exhaustion. These findings were consistent for threshold models as well as integrate-and-fire models of nodes, indicating that the topology rather than the activity model was responsible for balanced activity. In conclusion, hierarchical cluster architecture may provide the structural basis for the stable and diverse functional patterns observed in cortical networks. But how do networks with such properties arise?

Figure 5. (a) The hierarchical network organization ranges from clusters, such as the visual cortex, to sub-clusters, such as V1, to individual nodes representing cortical columns. (b) Schematic view of a hierarchical cluster network with five clusters containing five sub-clusters each. Examples of the spread of activity, shown as the fraction of activated nodes over time, in (c) random, (d) small-world and (e) hierarchical cluster networks (i = 100, i = 150), based on 20 simulations for each network.
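The threshold variant of such a spreading model can be sketched as follows. The network here is a scaled-down hierarchical cluster graph (250 nodes, 5 clusters of 5 sub-clusters of 10 nodes) with assumed edge probabilities and an activation threshold of 4 instead of 6; all of these parameters are illustrative choices, not those of the original simulations.

```python
import random

random.seed(7)

# Hierarchical cluster network: edge probability is highest within a
# sub-cluster, lower within a cluster, lowest between clusters (assumed values).
N, P_SUB, P_CLU, P_OUT = 250, 0.6, 0.12, 0.01
sub = lambda v: v // 10          # sub-cluster index of node v (10 nodes each)
clu = lambda v: v // 50          # cluster index of node v (50 nodes each)

adj = {v: set() for v in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        p = P_SUB if sub(i) == sub(j) else (P_CLU if clu(i) == clu(j) else P_OUT)
        if random.random() < p:
            adj[i].add(j); adj[j].add(i)

# Threshold spreading model: a node switches on when at least THRESHOLD
# neighbours are active; active nodes switch off with probability P_OFF.
THRESHOLD, P_OFF, STEPS = 4, 0.3, 50
active = set(random.sample(range(N), N // 10))   # activate 10% of the nodes
trace = [len(active) / N]
for _ in range(STEPS):
    nxt = {v for v in range(N)
           if v in active or sum(1 for u in adj[v] if u in active) >= THRESHOLD}
    nxt -= {v for v in nxt if random.random() < P_OFF}
    active = nxt
    trace.append(len(active) / N)

print([round(f, 2) for f in trace[:10]])   # fraction of activated nodes over time
```

Swapping the hierarchical wiring rule for a uniform (random) or rewired-ring (small-world) rule, while keeping the same number of edges, is the comparison that distinguishes sustained, limited activation from dying out or full activation.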
5. Design vs. Self-organization
Neural systems, rather than being designed, evolved over millions of years. Starting from diffuse homogeneous networks, network clusters evolved when different tasks had to be implemented. During individual brain development, the architecture is formed by a combination of genetic blueprint and self-organization (Striedter, 2005).

What are the mechanisms of self-organization during network development? A possible algorithm for developing spatial networks with long-distance connections and small-world connectivity is spatial growth (Kaiser & Hilgetag, 2004c). In this approach, the probability to establish a connection decays with the spatial (Euclidean) distance, thereby establishing a preference for short-distance connections. This assumption is reasonable for neural networks, as the concentration of growth factors decays with the distance to the source, so that faraway neurons have a lower probability to detect the signal and send a projection toward the source region of the growth factor. In addition, anatomical studies have shown that the probability of establishing a connection decreases with the distance between neurons.

In contrast to previous approaches that generated spatial graphs, the node positions were not determined before the start of connection establishment. Instead, starting with one node, a new node was added at each step at a randomly chosen spatial position. For all existing nodes, a connection between the new node u and an existing node v was established with probability

P(u, v) = \beta \, e^{-\alpha \, d(u,v)},   (5.1)

where d(u, v) was the spatial distance between the node positions, and α and β were scaling coefficients shaping the connection probability. A new node that did not manage to establish connections was removed from the network. Node generation was repeated until the desired number of nodes was established. Parameter β ("density") served to adjust the general probability of edge formation. The nonnegative coefficient α ("spatial range") regulated the dependence of edge formation on the distance to existing nodes. Depending on the parameters α and β, spatial growth could yield networks similar to small-world cortical networks, scale-free highway-transportation networks, as well as networks in non-Euclidean spaces such as metabolic networks (Kaiser & Hilgetag, 2004c). Specifically, it was possible to generate networks with a wiring organization similar to the macaque cortical network (Kaiser & Hilgetag, 2004b). Using different time domains for connection development, where several spatial regions of the network establish connections in partly overlapping time windows, allows the generation of multiple clusters or communities (Kaiser & Hilgetag, 2007).
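The growth rule of equation (5.1) translates almost directly into code. The unit-square geometry, seed, and the particular values of α and β below are assumptions for illustration; the original work explored a whole parameter range.

```python
import math, random

random.seed(3)

def spatial_growth(n_target, alpha, beta):
    """Grow a spatial network: each candidate node connects to every existing
    node with probability P(u, v) = beta * exp(-alpha * d(u, v)), eq. (5.1);
    candidates that fail to establish any connection are discarded."""
    pos = [(random.random(), random.random())]      # start with one node
    edges = []
    while len(pos) < n_target:
        p_new = (random.random(), random.random())  # random candidate position
        new_edges = [(len(pos), k) for k, p in enumerate(pos)
                     if random.random() < beta * math.exp(-alpha * math.dist(p_new, p))]
        if new_edges:                               # keep only connected nodes
            pos.append(p_new)
            edges.extend(new_edges)
    return pos, edges

pos, edges = spatial_growth(100, alpha=6.0, beta=0.9)
lengths = [math.dist(pos[i], pos[j]) for i, j in edges]
print(len(edges), round(sum(lengths) / len(lengths), 3))
```

With a large α most edges are short, yet the exponential tail still admits occasional long-distance connections, which is what gives the grown networks their small-world character.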
6. Outlook
Natural neural systems, such as cortical networks of connections between brain regions, have developed several properties that are desirable for computers as well. Cortical networks show an innate ability to compensate for and recover from damage to the network. Whereas removing the few highly-connected nodes has a large effect on network structure, a random removal of nodes or edges has a small effect in most cases. In addition, the spatial layout of cortical and neuronal networks, exhibiting several long-distance connections, ensures few processing steps and thus a faster response time. Speculating about the future, these mechanisms for robust and rapid processing might provide new ideas for artificial neural networks as well as for computer architecture. As the 'programme' of the brain is implemented in its wiring organization, the topology of the brain might inspire theoretical work on the organization of parallel processing and integration.

Toward these topics, we currently work on three questions. First, to identify properties for robust processing in the brain. This includes understanding mechanisms for recovery in neural systems. These mechanisms will then be applied to computer networks to see if they can lead to faster recovery after failure. Second, to investigate epileptic spreading in cortical networks. We intend to determine how the network structure influences activity or, for the disease state, seizure spreading in cortical networks. The more general analysis of spreading in networks could give useful insights into how to prevent virus spreading in communication networks. Finally, to find principles that guide the development of neural networks over time.
References
Achard, S., Salvador, R., Whitcher, B., Suckling, J., & Bullmore, E. (2006). A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. Journal of Neuroscience, 26:63–72.
Albert, R. & Barabási, A.-L. (2002). Statistical mechanics of complex networks. Rev. Mod. Phys., 74(1):47–97.
Albert, R., Jeong, H., & Barabási, A.-L. (2000). Error and attack tolerance of complex networks. Nature, 406:378–382.
Barabási, A.-L. & Albert, R. (1999). Emergence of scaling in random networks. Science, 286:509–512.
Brodmann, K. (1909). Vergleichende Lokalisationslehre der Grosshirnrinde in ihren Prinzipien dargestellt auf Grund des Zellenbaues. Barth, Leipzig.
Buzsaki, G., Geisler, C., Henze, D. A., & Wang, X.-J. (2004). Interneuron diversity series: Circuit complexity and axon wiring economy of cortical interneurons. Trends Neurosci., 27(4):186–193.
Cherniak, C. (1994). Component placement optimization in the brain. J. Neurosci., 14(4):2418–2427.
Crick, F. & Jones, E. (1993). Backwardness of human neuroanatomy. Nature, 361(6408):109–110.
Damier, P., Hirsch, E. C., Agid, Y., & Graybiel, A. M. (1999). The substantia nigra of the human brain. II. Patterns of loss of dopamine-containing neurons in Parkinson's disease. Brain, 122:1437–1448.
Dyhrfjeld-Johnsen, J., Santhakumar, V., Morgan, R. J., Huerta, R., Tsimring, L., & Soltesz, I. (2007). Topological determinants of epileptogenesis in large-scale structural and functional models of the dentate gyrus derived from experimental data. J. Neurophysiol., 97:1566–1587.
Erdős, P. & Rényi, A. (1960). On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci., 5:17–61.
Faloutsos, M., Faloutsos, P., & Faloutsos, C. (1999). On power-law relationships of the internet topology. ACM SIGCOMM Comput. Commun. Rev., 29:251–262.
Felleman, D. J. & van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex, 1:1–47.
Hilgetag, C. C., Burns, G. A. P. C., O'Neill, M. A., Scannell, J. W., & Young, M. P. (2000). Anatomical connectivity defines the organization of clusters of cortical areas in the macaque monkey and the cat. Phil. Trans. R. Soc. Lond. B, 355:91–110.
Hilgetag, C. C. & Kaiser, M. (2004). Clustered organisation of cortical connectivity. Neuroinformatics, 2:353–360.
Huberman, B. A. & Adamic, L. A. (1999). Growth dynamics of the world-wide web. Nature, 401:131.
Kaiser, M., Goerner, M., & Hilgetag, C. C. (2007a). Criticality of spreading dynamics in hierarchical cluster networks without inhibition. New J. Phys., 9:110.
Kaiser, M. & Hilgetag, C. C. (2004a). Edge vulnerability in neural and metabolic networks. Biol. Cybern., 90:311–317.
Kaiser, M. & Hilgetag, C. C. (2004b). Modelling the development of cortical networks. Neurocomp., 58–60:297–302.
Kaiser, M. & Hilgetag, C. C. (2004c). Spatial growth of real-world networks. Phys. Rev. E, 69:036103.
Kaiser, M. & Hilgetag, C. C. (2006). Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems. PLoS Comput. Biol., e95.
Kaiser, M. & Hilgetag, C. C. (2007). Development of multi-cluster cortical networks by time windows for spatial growth. Neurocomp., 70(10–12):1829–1832.
Kaiser, M., Martin, R., Andras, P., & Young, M. P. (2007b). Simulation of robustness against lesions of cortical networks. Eur. J. Neurosci., 25:3185–3192.
Passingham, R. E., Stephan, K. E., & Kötter, R. (2002). The anatomical basis of functional localization in the cortex. Nat. Rev. Neurosci., 3:606–616.
Rosenblatt, F. (1959). Principles of Neurodynamics. Spartan Books, New York.
Scannell, J., Blakemore, C., & Young, M. (1995). Analysis of connectivity in the cat cerebral cortex. J. Neurosci., 15(2):1463–1483.
Spear, P., Tong, L., & McCall, M. (1988). Functional influence of areas 17, 18 and 19 on lateral suprasylvian cortex in kittens and adult cats: implications for compensation following early visual cortex damage. Brain Res., 447(1):79–91.
Sporns, O., Chialvo, D. R., Kaiser, M., & Hilgetag, C. C. (2004). Organization, development and function of complex brain networks. Trends Cogn. Sci., 8:418–425.
Sporns, O., Tononi, G., & Edelman, G. M. (2000). Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cereb. Cortex, 10:127–141.
Striedter, G. F. (2005). Principles of Brain Evolution. Sinauer Associates.
Tuch, D. S., Wisco, J. J., Khachaturian, M. H., Ekstrom, L. B., Kötter, R., & Vanduffel, W. (2005). Q-ball imaging of macaque white matter architecture. Phil. Trans. R. Soc. B, 360:869–879.
von Neumann, J. (1958). The Computer and the Brain. Yale University Press.
Watts, D. J. (1999). Small Worlds. Princeton University Press, Princeton.
Watts, D. J. & Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature, 393:440–442.
Young, M. P. (1993). The organization of neural systems in the primate cerebral cortex. Phil. Trans. R. Soc., 252:13–18.