Porting HTM Models to the Heidelberg Neuromorphic Computing Platform
Sebastian Billaudelle∗ (Kirchhoff-Institute for Physics, Heidelberg, Germany) and Subutai Ahmad† (Numenta, Inc., Redwood City, CA)
∗ email: [email protected] † email: [email protected]
ABSTRACT
Hierarchical Temporal Memory (HTM) is a computational theory of machine intelligence based on a detailed study of the neocortex. The Heidelberg Neuromorphic Computing Platform, developed as part of the Human Brain Project (HBP), is a mixed-signal (analog and digital) large-scale platform for modeling networks of spiking neurons. In this paper we present the first effort in porting HTM networks to this platform. We describe a framework for simulating key HTM operations using spiking network models. We then describe specific spatial pooling and temporal memory implementations, as well as simulations demonstrating that the fundamental properties are maintained. We discuss issues in implementing the full set of plasticity rules using Spike-Timing Dependent Plasticity (STDP), and rough place and route calculations. Although further work is required, our initial studies indicate that it should be possible to run large-scale HTM networks (including plasticity rules) efficiently on the Heidelberg platform. More generally, the exercise of porting high-level HTM algorithms to biophysical neuron models promises to be a fruitful area of investigation for future studies.
The mammalian brain, particularly that of humans, is able to process diverse sensory input, learn and recognize complex spatial and temporal patterns, and generate behaviour based on context and previous experiences. While computers are efficient in carrying out numerical calculations, they fall short in solving cognitive tasks. Studying the brain, and the neocortex in particular, is an important step towards developing new algorithms that close the gap between intelligent organisms and artificial systems. Numenta is a company dedicated to developing such algorithms and, at the same time, investigating the principles of the neocortex. Their Hierarchical Temporal Memory (HTM) models are designed to solve real-world problems based on neuroscience results and theories.

Efficiently simulating large-scale neural networks in software is still a challenge. The more biophysical detail a model features, the more computational resources it requires. Different techniques for speeding up the execution of such implementations exist, e.g. by parallelizing calculations. Dedicated hardware platforms are also being developed. Digital neuromorphic hardware like the SpiNNaker platform often features highly parallelized processing architectures and optimized signal routing [Furber et al., 2014]. Analog systems, on the other hand, directly emulate the neuron's behavior in electronic microcircuits. The Hybrid Multi-Scale Facility (HMF) is a mixed-signal platform developed within the scope of the BrainScaleS Project (BSS) and the Human Brain Project (HBP).

In this paper we present efforts in porting HTM networks to the HMF. A framework for simulating HTMs based on spiking neural networks is introduced, as well as concrete network models for the HTM concepts of spatial pooling and temporal memory. We compare their behavior to software implementations in order to verify basic properties of the HTM networks.
We discuss the overall applicability of these models on the target platform, the impact of synaptic plasticity, and connection routing considerations.

HTM represents a set of concepts and algorithms for machine intelligence based on neocortical principles [Hawkins et al., 2011]. It is designed to learn spatial as well as temporal patterns and generate predictions from previously seen sequences. It features continuous learning and operates on streaming data. An HTM network consists of one or multiple hierarchically arranged regions. The latter contain neurons organized in columns. The functional principle is captured in two algorithms which are laid out in detail in the original whitepaper [Hawkins et al., 2011]. The following paragraphs are intended as an introductory overview and introduce the properties relevant to this work.

The spatial pooler is designed to map a binary input vector to a set of columns. By recognizing previously seen input data, it increases stability and reduces the system's susceptibility to noise. Its behaviour can be characterized by the following properties:

1. The columnar activity is sparse. Typically, 40 out of 2,048 columns are active, which corresponds to a sparsity of approximately 2 %. The number of active columns is constant in each time step and does not depend on the input sparsity.

2. The spatial pooler activates the k columns which receive the most input. In case of a tie between two columns, the active column is selected randomly, e.g. through slight structural advantages of certain cells compared to their neighbors.

3. Stimuli with low pairwise overlap counts are mapped to sparse columnar representations with low pairwise overlap counts, while high overlaps are projected onto representations with high overlap. Thus, similar input vectors lead to a similar columnar activation, while disjunct stimuli activate distinct columns.

4. A column must receive a minimum input (e.g.
15 bits) to become active.

The temporal memory operates on single cells within columns and further processes the spatial pooler's output. Temporal sequences are learned by the network and can be used for generating predictions and highlighting anomalies. Individual cells receive stimuli from other neurons on their distal dendrites. This lateral input provides a temporal context. By modifying a cell's distal connectivity, temporal sequences can be learned and predicted. The temporal memory's behavior can be summarized by the following:

1. Individual cells receive lateral input on their distal dendrites. In case a certain threshold is crossed, the cells enter a predictive (depolarized) state.

2. When a column becomes active due to proximal input, it activates only those cells that are in the predictive state.

3. When a column with no predicted cells becomes active due to proximal input, all cells in the column become active. This phenomenon is referred to as columnar bursting.

The HMF is a hybrid platform consisting of a traditional high-performance cluster and a neuromorphic system. It is developed primarily at the Kirchhoff-Institute for Physics in Heidelberg and the TU Dresden, while receiving funding from the BSS and HBP [HBP SP9 partners, 2014]. The platform's core is the wafer-scale integrated High Input Count Analog Neural Network (HICANN) chip as shown in Figure 1. Part of the chip's unique design is its mixed-signal architecture featuring analog neuron circuits and a digital communication infrastructure. Due to the short intrinsic time constants of the hardware neurons, the system operates on an accelerated timescale with a speed-up factor of 10,000 compared to biological real-time.

HICANN features 512 neurons or dendritic membrane circuits. Each circuit can be stimulated via 226 synapses on two synaptic inputs. As a default, the latter are configured for excitatory and inhibitory stimuli, respectively. However, they can be set up to represent, e.g.,
two excitatory inputs with different synaptic time constants or reversal potentials.

Fig. 1.
A wafer containing 384 HICANN chips. The undiced wafer undergoes a custom post-processing step in which additional metal layers are applied to establish inter-reticle connectivity and power distribution. (Photo courtesy of the Electronic Vision(s) group, Heidelberg.)
By connecting multiple dendritic membranes, larger neurons with up to approximately 14,000 synapses can be formed. A single wafer contains 384 chips with roughly 200,000 neurons and 45 million synapses. Multiple wafers can be connected to form even larger networks. The BSS's infrastructure consists of six wafers and is being extended to 20 wafers for the first HBP milestone.

There exist different techniques of varying complexity for simulating networks of spiking neurons. The reference implementation we use for HTM networks is based on first-generation, binary neurons with discrete time steps [Numenta, Inc]. Third-generation models, however, incorporate the concept of dynamic time and implement inter-neuron communication based on individual spikes. Starting from the original Hodgkin-Huxley equations [Hodgkin and Huxley, 1952], multiple spiking neuron models were developed that feature different levels of detail and abstraction. The HICANN chip implements Adaptive Exponential Integrate-and-Fire (AdEx) neurons [Brette and Gerstner, 2005]. At its core, the AdEx model is a simple Leaky Integrate-and-Fire (LIF) model, but it features a detailed spiking behavior as well as spike-triggered and sub-threshold adaptation. It was found to correctly predict approximately 96 % of the spike times of a Hodgkin-Huxley-type model neuron and about 90 % of the spikes recorded from a cortical neuron [Jolivet et al., 2008]. On the HMF, and thus also in the following simulations, the neurons are paired with conductance-based synapses, allowing for fine-grained control of the synaptic currents and the implementation of e.g. shunting inhibition.

Implementing neural network models for a neuromorphic hardware platform or dynamic software simulations requires an abstract network description defining the individual cell populations as well as the model's connectivity.
For this work, our primary focus was on developing mechanistic and functional implementations of the software reference models while staying within the topological and parameter restrictions imposed by the hardware platform. A more detailed biophysical approach would begin with simulations of single HTM neurons and their dendritic properties before advancing to more complex systems, e.g. full networks. In the following sections we describe spatial pooler and temporal memory models that incorporate basic HTM properties. These models are able to reproduce the fundamental behaviour of existing software implementations. The simulations were set up in Python using the PyNN library [Davison et al., 2008]. Besides supporting a wide range of software simulators, this high-level interface is also supported by the HMF platform [Billaudelle, 2014a]. NEST was used as the simulation backend [Gewaltig and Diesmann, 2007]. To enable multiple synaptic time constants per neuron, a custom implementation of the AdEx model was written.
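The timing-based circuits described in the following sections hinge on a simple property of LIF dynamics: the membrane rise time shrinks as more presynaptic events coincide. The following minimal sketch illustrates this with a plain forward-Euler, conductance-based LIF neuron in pure Python. It is not the custom AdEx implementation used for the actual simulations, and all parameter values are illustrative placeholders, not calibrated HICANN values.

```python
def simulate_lif(n_active, t_sim=30.0, dt=0.01, v_rest=-65.0,
                 v_thresh=-50.0, g_leak=0.01, c_m=0.02,
                 tau_syn=10.0, e_rev=0.0, w=0.002):
    """Forward-Euler integration of a conductance-based LIF neuron.

    All `n_active` presynaptic sources fire once, simultaneously, at
    t = 1 ms. Returns the time of the first output spike in ms, or
    None if the threshold is never crossed.
    """
    v, g_syn = v_rest, 0.0
    for step in range(int(t_sim / dt)):
        t = step * dt
        if abs(t - 1.0) < dt / 2:
            g_syn += w * n_active            # coincident input events
        # membrane equation: c_m dv/dt = g_l(E_l - v) + g_syn(E_rev - v)
        i_total = g_leak * (v_rest - v) + g_syn * (e_rev - v)
        v += dt * i_total / c_m
        g_syn -= dt * g_syn / tau_syn        # exponential synaptic decay
        if v >= v_thresh:
            return t
    return None
```

A cell driven by many coincident inputs crosses threshold earlier than one driven by few; this time-to-first-spike ordering is exactly what the spatial pooler circuit below exploits.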
At its core, the spatial pooler resembles a k-Winners-Take-All (kWTA) network: k out of m columns are chosen to be active in each time step. In fact, kWTA networks have often been mentioned as an approximation for circuits naturally occurring in the neocortex [Felch and Granger, 2008]. Continuous-time and VLSI implementations of such systems have been discussed in the literature [Erlanson and Abu-Mostafa, 1991, Tymoshchuk, 2012, Maass, 2000]. In the implementation below we describe a novel approach to maintaining stable sparsity levels even with a large number of inputs.

The network developed for this purpose is presented in Figure 2. It follows a purely time-based approach and is designed for LIF neurons. It allows for very fast decision processes based on a single input event per source. Each column is represented by a single cell which accumulates feed-forward input from the spike sources. Here, the rise time of the membrane voltage decreases with the number of presynaptic events seen by the cell: cells receiving the most input will fire before the others. An inhibitory pool consisting of a single cell collects the network's activity. Low membrane and high synaptic time constants lead to a reliable summation of events. When a certain number of spikes has been collected – and thus the cell's threshold has been crossed – the pool strongly inhibits all cells of the network, suppressing subsequent spike events. The model is extended by adding subtle feed-forward shunting inhibition.

Fig. 2.
Timing-based implementation of the spatial pooler. Each column is represented by a single cell C and receives sparse input from the input vector. The columns become active when the number of connected active inputs crosses a threshold. The rise time of the membrane voltage strongly depends on the number of coincident inputs: cells with more presynaptic activity will fire before those with less stimuli do. The inhibitory pool I accumulates the columnar spikes and in doing so acts as a counter. After a certain number of columns have become active, the pool will inhibit and shut down all columns, preventing any further activity. To stabilize this kWTA model, all columns receive a subsampled feed-forward inhibition. This effectively prolongs the decision period for high input activity.

The inhibitory conductance increases with the overall input activity ν_in. With the reversal potential set to match the leakage potential, the conductance contributes to the leakage term: g′_l = g_l + g_inh(ν_in). This effectively slows down the neurons' responses and thus prolongs the decision period of the network. With this inhibition, the resulting system is able to achieve stable sparsity levels with a large number of inputs, at the cost of slightly slower response times. Tie situations between columns receiving the same number of presynaptic events are resolved by adding slight Gaussian jitter to the weights of the excitatory feed-forward connections. This gives some columns structural advantages over other columns, resulting in a slightly faster response to the same stimulus. By increasing the standard deviation σ_j of the jitter, the selection criterion can be blurred.

Similar to the spatial pooler, the temporal memory implementation was designed for fast reaction times and spike-timing based response patterns. A complete network consists of m identical columns with n HTM cells each. Modelling these cells is a challenge in itself. A multicompartmental neuron model would represent the best fit.
While a neuromorphic hardware chip implementing such

Fig. 3.
Implementation of the temporal memory, not including plasticity. Every HTM cell within a column is modeled with three individual LIF cells representing different compartments (distal dendrites D, soma S, and a lateral inhibition segment I, which is not biologically inspired). Per column, there exist multiple such cell triples as well as one "head" cell P which participates in the columnar competition and collects proximal input for the whole column. Activity of this cell is forwarded to the individual soma cells of the column. Without a previous prediction, this results in all soma cells firing. However, the distal compartment sums over the input of the previous time step. When a threshold is reached, the inhibitory compartment as well as the soma are depolarized.
Together with proximal input, the inhibitory partition fires and inhibits all other cells in the column.

a model is planned and first steps in that direction have already been taken [Millner, 2012], the current system does not provide this feature. Since HTM cells primarily depend on the active properties of a compartment, each can be modelled by a triple of individual LIF cells as shown in Figure 3. A column collects proximal input using a single cell. In fact, this cell can be part of a spatial pooler network as presented in section 2.1. When the column becomes active, this cell emits a spike and excites both the neurons representing the HTM cells' somata as well as the inhibitory cells. The inhibitory projection, however, is not strong enough to activate the target compartment alone. Instead, it only leads to a partial depolarization. The soma neuron, in contrast, reaches the firing threshold for a single presynaptic event. This suffices as a columnar bursting mechanism (i.e. temporal memory property 3): without predictive input, all soma compartments will fire in response to the proximal stimulus.

Distal input is processed for each cell individually by its dendritic segment compartments. A cell's dendritic segment receives input from other cells' somata. When its firing threshold is crossed, it partly depolarizes the inhibitory helper cell of the same triplet. This synaptic projection is set up with a relatively long synaptic time constant and a reversal potential matching the threshold voltage. This ensures that the predictive state is carried into the next time step and prohibits the cell from becoming active due to distal input alone. On proximal input, the already depolarized helper cell fires before the somatic compartments. The latter are then inhibited instantly, with the exception of the triplet's own soma. As described, this basic predictive mechanism fails when multiple cells are predicted, since the inhibitory compartments laterally inhibit every cell.
The solution is to also depolarize the somatic compartments of predicted cells. In summary, this mechanism satisfies temporal memory properties 1 and 2.

The network models presented in the previous sections were simulated in software to investigate their behavior. In the following, the respective experiments and their results are shown. Additionally, plasticity rules and topological requirements are discussed with respect to the HMF.
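Before turning to the experiments, the columnar activation rules implemented by the LIF triplets (temporal memory properties 2 and 3) can be condensed into a short functional sketch. This operates on abstract column and cell indices rather than on membrane dynamics, and the helper name is ours, not from the reference implementation:

```python
def temporal_memory_step(active_columns, predictive_cells, cells_per_column=32):
    """One time step of the temporal memory activation rules.

    `predictive_cells` holds (column, cell) pairs that were depolarized
    by distal input in the previous step. Columns containing predicted
    cells activate only those cells (property 2); columns without any
    prediction burst, i.e. activate all of their cells (property 3).
    """
    active_cells = set()
    for col in active_columns:
        predicted = {(c, i) for (c, i) in predictive_cells if c == col}
        if predicted:
            active_cells |= predicted        # only predicted cells fire
        else:                                # columnar bursting
            active_cells |= {(col, i) for i in range(cells_per_column)}
    return active_cells
```

For example, with columns 1 and 2 active and only cell (1, 2) predicted, column 1 activates that single cell while column 2 bursts.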
The spatial pooler was analyzed for a network spanning 1,000 columns and an input vector of size 10,000. To speed up the simulation, the input connectivity was preprocessed in software by multiplying the stimulus vector by the connectivity matrix.

A first experiment was designed to verify the basic kWTA functionality. A random pattern was presented to the network. The number of active inputs per column – the input overlap score – can be visualized in a histogram as shown in Figure 5. By highlighting the columns activated by that specific stimulus, one can investigate the network's selection criteria. Complying with the requirements for a spatial pooler, only the rightmost bars – representing columns with the highest input counts – are highlighted. Furthermore, the model's capability to resolve ties between columns receiving the same input counts is demonstrated: the bar at the decision boundary was not selected as a whole, but only a few of its columns were picked. This verifies spatial pooler property 2.

In a second scenario, input vectors with varying sparsity were fed into the network, as shown in Figure 6.

Fig. 5. Histogram showing the distribution of overlap scores received by individual columns. Columns activated by the spatial pooler network are highlighted. This confirms that only competitors with the highest input enter an active state. Furthermore, tie situations between columns with the same overlap score are resolved correctly.

The number of active columns stays approximately constant across a wide range of input sparsity. Additionally, the plot shows that columns must receive a minimum amount of input to become active at all. This verifies the underlying kWTA approach as well as spatial pooler properties 1 and 4.

To verify the general functionality of a spatial pooler, expressed in property 3, a third experiment was set up. Input data sets with a variable overlap were generated starting from an initial random binary vector. For each stimulus, the overlap of the columnar activity with the initial dataset was calculated while sweeping the input's overlap. The resulting relation of input and output overlap scores is shown in Figure 7. Also included are the results of a similar experiment performed with a custom Python implementation of the spatial pooler directly following the original specification [Hawkins et al., 2011]. Multiple simulation runs all yielded results perfectly matching the reference data, thus verifying property 3.

The experiments have shown that the model presented in this section fulfills the requirements for a spatial pooler and can be considered a solid kWTA implementation. The specific results of course depend on the individual network size and configuration. In this case, the network – most importantly the columnar neurons' time constants – was configured for a relatively short time step of T = 50 ms. By choosing different parameter sets, the network can be tuned towards different operational scenarios, e.g. further increasing the model's stability.

The temporal memory was verified in a first sequence prediction experiment. A reference software implementation
Fig. 6. The relative number of active columns is plotted against the input vector's sparsity. After a certain level of input sparsity is reached, columns start to enter active states. With higher presynaptic activity, columnar competition increases and the output sparsity reaches a plateau. The curve's exact course, as well as the size of the plateau, can be manipulated through the neurons' parameters. Error bars indicate the standard deviation across five trials.
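Abstracting away from the spiking dynamics, the selection behavior verified in these experiments is a k-winners-take-all over the columns' overlap scores, with a minimum-input cutoff (property 4) and Gaussian weight jitter breaking ties (property 2). A discrete sketch, with illustrative parameter values rather than the configuration used in the paper:

```python
import numpy as np

def kwta(overlap_scores, k=40, min_overlap=15, sigma_j=0.1, seed=0):
    """Discrete abstraction of the timing-based kWTA circuit.

    Gaussian jitter stands in for the randomized feed-forward weights
    that break ties between columns with equal overlap; columns below
    the minimum-input threshold can never become active.
    """
    rng = np.random.default_rng(seed)
    jittered = overlap_scores + rng.normal(0.0, sigma_j, len(overlap_scores))
    jittered[np.asarray(overlap_scores) < min_overlap] = -np.inf  # property 4
    winners = np.argsort(jittered)[-k:]            # k strongest columns
    return winners[jittered[winners] > -np.inf]
```

When 100 columns share the maximal overlap score and k = 40, the jitter selects a random subset of 40 of them, mirroring the tie-breaking seen at the decision boundary in Figure 5.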
Fig. 7. Output overlap as a function of the input vector's overlap score. In each of the five simulation runs, the stimulus was gradually changed starting from a random vector. As required for a spatial pooler, two similar input stimuli are mapped to similar output patterns, while disjunct input vectors result in low overlap scores. The simulations fully reproduce data from an existing software implementation, which is also shown in this figure.

was trained with three disjunct sequences of length three. Consecutive sequences were separated by a random input pattern. The trained network's lateral connectivity was dumped and loaded in a simulation. When presented with the same stimulus, the LIF-based implementation was able to correctly predict sequences, as shown in Figure 8.

Fig. 9.
Dependency of output and input overlap for a trained spatial pooler. Results of five independent simulation runs are shown, as well as reference data from a custom software implementation.
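The overlap sweeps behind Figures 7 and 9 follow a simple recipe that can be sketched independently of the spiking substrate: build a random potential pool, derive the active columns for a base stimulus, then degrade the stimulus and compare the columnar activations. The network sizes and pool density below are arbitrary illustrative choices, much smaller than in the paper's experiments:

```python
import numpy as np

def active_columns(stimulus, pool, k=40):
    # Idealized spatial pooler: the k columns with the largest input
    # counts win; the stable sort breaks ties deterministically here
    # (the spiking model uses jittered weights instead).
    scores = pool @ stimulus
    return set(np.argsort(scores, kind="stable")[-k:])

rng = np.random.default_rng(1)
n_inputs, n_columns = 1000, 256
pool = (rng.random((n_columns, n_inputs)) < 0.1).astype(int)

base = np.zeros(n_inputs, dtype=int)
base[rng.choice(n_inputs, 100, replace=False)] = 1
base_cols = active_columns(base, pool)

# Degrade the stimulus bit by bit and track the columnar overlap;
# n_drop = 0 reproduces all 40 base columns.
for n_drop in (0, 20, 50, 80):
    noisy = base.copy()
    noisy[rng.choice(np.flatnonzero(base), n_drop, replace=False)] = 0
    print(n_drop, len(base_cols & active_columns(noisy, pool)))
```

Plotting the shared column count against the input overlap yields the monotonic relation shown in Figure 7.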
Implementing online learning mechanisms in neuromorphic hardware is a challenge, especially for accelerated systems. Although the HMF features implementations of nearest-neighbor Spike-Timing Dependent Plasticity (STDP) and Short Term Plasticity (STP) [Friedmann, 2013a, Billaudelle, 2014b], more complex update algorithms are hard to implement. Numenta's networks rely on structural plasticity rules which go beyond these mechanisms.

The spatial pooler's stimulus changes significantly for learned input patterns, so verification of its functionality under these conditions is important. In order to follow the HTM specification as closely as possible, a supervised update rule was implemented in an outer loop: in each time step, a matrix containing the connections' permanence values is updated according to the activity patterns of the previous time step. This allows us to implement the concepts of structural plasticity presented in the original whitepaper. For the target platform, the learning algorithms could be implemented on the Plasticity Processing Unit (PPU) which is planned for the next version of the HICANN chip [Friedmann, 2013b]. Simulation results of the implementation described above are shown in Figure 9.

Experiments to replace the HTM structural plasticity rules with a classic nearest-neighbor STDP model did not yield the desired results. The HTM learning rules require negative modifications to inactive synapses in segments that contribute to cell activity. In contrast, STDP does not affect inactive synapses.
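The outer-loop update can be sketched as follows. The constants and the helper name are illustrative; the reference implementation's actual permanence increments and connection threshold differ:

```python
import numpy as np

def update_permanences(perm, input_vec, active_columns,
                       p_inc=0.05, p_dec=0.03, connected=0.2):
    """One outer-loop structural plasticity step for the spatial pooler.

    For every active column, permanences of synapses aligned with
    active input bits are incremented and all others decremented;
    the binary connectivity matrix is then re-derived by thresholding
    the permanences.
    """
    perm = perm.copy()
    delta = np.where(input_vec > 0, p_inc, -p_dec)   # per-input change
    perm[active_columns] = np.clip(perm[active_columns] + delta, 0.0, 1.0)
    return perm, (perm >= connected).astype(int)
```

Because inactive synapses of active columns are explicitly weakened, this rule performs the negative updates that a pairwise STDP kernel cannot express, which is exactly where the STDP replacement experiments failed.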
Applying abstract network models to the hardware platform requires algorithms for placing the neuron populations and routing the synaptic connections. In a best-case scenario, this processing step results in an isomorphic projection of the network graph onto the hardware topology. For networks with extreme connectivity requirements, however, synaptic losses must be expected.

Mapping the simulated networks does not represent a challenge for the routing algorithms. The temporal memory can be projected onto a single wafer without synaptic loss. The same still applies with assumed lateral all-to-all connectivity, resulting in approximately 2 million synapses. The latter network corresponds to a network with a potential pool of 100 %, which would allow the exploration of learning algorithms even without creating and pruning hardware synapses.

On the hardware platform, a tradeoff between the number of afferent connections per cell and the number of neurons must be taken into consideration: while it is possible to connect the dendritic membrane circuits such that a single neuron can listen on roughly 14,000 synapses, such a network could only consist of approximately 3,000 neurons per wafer. With just 226 synapses per neuron, just under 200,000 neurons can be allocated per wafer.

Scaling up the proof-of-concept models to a size useful for production purposes, however, challenges the hardware topology as well as the projection algorithms. A minimal useful HTM network spans 1,024 columns with 8 cells each. In such a scenario the neurons would receive lateral input on 32 dendritic segments. Allowing approximately 1,000 afferent connections per dendritic segment, this network could be realized on approximately one million dendritic membrane circuits, or six wafers. The existing system set up for the BSS would suffice for this scenario. Even larger networks could be brought to the HBP's platform.
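The scaling estimate above can be reproduced with back-of-envelope arithmetic. The per-wafer figures are taken from the text, but the number of synapses per distal segment is an assumption (the exponents are garbled in the source), so the result should be read as an order-of-magnitude check rather than an exact wafer count:

```python
import math

# Per-wafer resources stated in the text.
CHIPS_PER_WAFER = 384
CIRCUITS_PER_CHIP = 512
SYNAPSES_PER_CIRCUIT = 226
circuits_per_wafer = CHIPS_PER_WAFER * CIRCUITS_PER_CHIP   # 196,608

# "Minimal useful" HTM network from the text.
columns, cells_per_column, segments_per_cell = 1024, 8, 32
synapses_per_segment = 1000   # assumption, not stated explicitly

segments = columns * cells_per_column * segments_per_cell  # 262,144
circuits_per_segment = math.ceil(synapses_per_segment / SYNAPSES_PER_CIRCUIT)
total_circuits = segments * circuits_per_segment           # ~1.3 million
wafers = total_circuits / circuits_per_wafer
print(total_circuits, round(wafers, 1))
```

With roughly 900 synapses per segment the estimate drops to four circuits per segment, landing at the six wafers quoted in the text; either way the network sits at about one million dendritic membrane circuits.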
Implementing machine intelligence algorithms as spiking neural networks and porting them to a neuromorphic hardware platform places high demands in terms of precision and scalability. We have shown in this paper that HTMs can be successfully modeled in dynamic simulations. The basic functionality of spatial pooler and temporal memory networks could be reproduced based on AdEx neurons. In theory, the proof-of-concept networks can be easily transferred to the HMF, since the high-level software interfaces are designed to be interchangeable. Of course, emulating the models on the actual hardware platform will bring up a new set of challenges.

Adapting the HTM's learning rules to the native plasticity features available on the HMF has turned out to be nontrivial. The learning rules could not be replicated with the current implementation of classic STDP. As a freely programmable microprocessor directly embedded into the neuromorphic core, the PPU provides the ability to extend the system's plasticity mechanisms in order to implement the HTM rules. Further investigation is required to map out a complete implementation of the HTM update rules on the PPU.

Analog neuromorphic hardware is susceptible to transistor mismatches due to e.g. dopant fluctuations in the production process [Petrovici et al., 2014]. A careful calibration of the individual neurons is required to compensate for these variations. Due to the complexity of the problem and the high number of interdependent variables, a perfect calibration is hard to accomplish. Therefore, network models are required to be tolerant of certain spatial and trial-to-trial variations on the computing substrate. Carrying out additional Monte Carlo simulations with slightly randomized parameters is important to investigate the robustness of the presented networks.

Finally, a multicompartmental neuron model is planned for a later version of the neuromorphic platform.
Making use of this extended feature set will significantly increase the level of biophysical detail. This will account for the detailed dendritic model used in HTMs and help to stay closer to the whitepaper as well as the reference implementation. Besides paving the road towards a highly accelerated execution of HTM models, the HMF offers a high level of detail in its neuron implementation. With the multicompartmental extension and a flexible plasticity framework, we anticipate the platform will prove valuable as a tool for further low-level research on HTM theories.

ACKNOWLEDGEMENTS
Special thanks to Jeff Hawkins, Prof. Dr. Karlheinz Meier,Paxon Frady, and the Numenta Team.
REFERENCES
Sebastian Billaudelle. PyHMF – eine PyNN-kompatible Schnittstelle für das HMF-System, 2014a.

Sebastian Billaudelle. Characterisation and calibration of short term plasticity on a neuromorphic hardware chip. Bachelor's thesis, Universität Heidelberg, 2014b.

Romain Brette and Wulfram Gerstner. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5):3637–3642, 2005.

Andrew P. Davison, Daniel Brüderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, and Pierre Yger. PyNN: a common interface for neuronal network simulators. Frontiers in Neuroinformatics, 2, 2008.

Ruth Erlanson and Yaser S. Abu-Mostafa. Analog neural networks as decoders. In Advances in Neural Information Processing Systems, pages 585–588, 1991.

Andrew C. Felch and Richard H. Granger. The hypergeometric connectivity hypothesis: Divergent performance of brain circuits with different synaptic connectivity distributions. Brain Research, 1202:3–13, 2008.

Simon Friedmann. A new approach to learning in neuromorphic hardware. PhD thesis, Universität Heidelberg, 2013a.

Simon Friedmann. A new approach to learning in neuromorphic hardware. PhD thesis, Universität Heidelberg, 2013b.

S. B. Furber, F. Galluppi, S. Temple, and L. A. Plana. The SpiNNaker project. Proceedings of the IEEE, (99):1–14, 2014.

Marc-Oliver Gewaltig and Markus Diesmann. NEST (NEural Simulation Tool). Scholarpedia, 2(4):1430, 2007.

Jeff Hawkins, Subutai Ahmad, and Donna Dubinsky. Cortical Learning Algorithm and Hierarchical Temporal Memory, 2011. URL http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf.

HBP SP9 partners. Neuromorphic Platform Specification. Human Brain Project, March 2014.

A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117:500–544, 1952.

Renaud Jolivet, Felix Schürmann, Thomas K. Berger, Richard Naud, Wulfram Gerstner, and Arnd Roth. The quantitative single-neuron modeling competition. Biological Cybernetics, 99(4-5):417–426, 2008.

Wolfgang Maass. Neural computation with winner-take-all as the only nonlinear operation. Citeseer, 2000.

Sebastian Millner. Development of a Multi-Compartment Neuron Model Emulation. PhD thesis, Universität Heidelberg, 2012.

Mihai A. Petrovici, Bernhard Vogginger, Paul Müller, Oliver Breitwieser, Mikael Lundqvist, Lyle Muller, Matthias Ehrlich, Alain Destexhe, Anders Lansner, René Schüffny, Johannes Schemmel, and Karlheinz Meier. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLOS ONE, 2014.

Numenta, Inc. Numenta Platform for Intelligent Computing (NuPIC). URL http://numenta.org/.

Pavlo V. Tymoshchuk. A continuous-time model of analogue k-winners-take-all neural circuit. In Chrisina Jayne, Shigang Yue, and Lazaros Iliadis, editors, Engineering Applications of Neural Networks, volume 311 of Communications in Computer and Information Science, pages 94–103. Springer Berlin Heidelberg, 2012. ISBN 978-3-642-32908-1.

Fig. 4.
Neuron traces for a temporal memory column containing three HTM cells. Each of these cells is represented by a somatic compartment, an inhibitory helper cell, and two dendritic segments. The column is activated by proximal input in every time step and receives random distal stimuli predicting none, one, or more cells per step. As indicated by the automatic classification algorithm, the column exhibits a correct response pattern to these predictions.
Fig. 8. Cell activity of the temporal memory network for the three trained sequences (panels a1–a3, b1–b3, and c1–c3 plot cell index against column index, separated by random input patterns).