Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information
Florian Mirus
Research, New Technologies, Innovations, BMW AG
Garching, Germany
[email protected]

Terrence C. Stewart
Applied Brain Research Inc.
Waterloo, Ontario, Canada
[email protected]

Jörg Conradt
Dep. of Comp. Science and Technology, KTH Royal Institute of Technology
Stockholm, Sweden
[email protected]
Abstract—Vector Symbolic Architectures belong to a family of related cognitive modeling approaches that encode symbols and structures in high-dimensional vectors. Similar to human subjects, whose capacity to process and store information or concepts in short-term memory is subject to numerical restrictions, the capacity of information that can be encoded in such vector representations is limited, and modeling this limit is one way of capturing the numerical restrictions to cognition. In this paper, we analyze these limits regarding the information capacity of distributed representations. We focus our analysis on simple superposition and on more complex, structured representations involving convolutive powers to encode spatial information. In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector, depending only on the dimensionality of the underlying vector space.
Index Terms—VSAs (Vector Symbolic Architectures), distributed representations, capacity analysis, cognitive modeling
I. INTRODUCTION
Understanding and building cognitive systems has seen extensive research over the last decades, leading to the development of several cognitive architectures. A cognitive architecture is a "general proposal about the representation and processes that produce intelligent thought" [1]. On the one hand, these architectures are used to explain and better understand important aspects of human behavior and intelligence. On the other hand, they are also used to design computers and robots mimicking certain cognitive abilities of humans.

VSAs (Vector Symbolic Architectures) [2] refers to a family of related cognitive modeling approaches that represent symbols and structures by mapping them to (high-dimensional) vectors. Such vectors are one variant of distributed representations in the sense that information is captured over all dimensions of the vector instead of in one single number, which allows both symbol-like and numerical structures to be encoded in a similar and unified way. Additionally, the architectures' algebraic operations allow manipulation and combination of the represented entities into structured representations. There are several architectures such as MAP (Multiply-Add-Permutate) [3], BSCs (Binary Spatter Codes) [4] and HRRs (Holographic Reduced Representations) [5], which propose different compressed multiplication operations replacing the initially used tensor product [6] and resulting in vectors with the same dimension as the input vectors. One advantage of this approach is that the number of dimensions remains fixed, independent of the number of entities combined through the architecture's algebraic operations. Schlegel et al. [7] give an overview of eight different variants of VSAs and compare their properties and characteristics.

VSAs have been employed in a diverse variety of application domains, for instance, as one building block for implementing cognitive tasks such as RPMs (Raven's Progressive Matrices) [8] in SNNs (Spiking Neural Networks) [9] for the Spaun (Semantic Pointer Architecture Unified Network) model [10]. Furthermore, VSAs have been used for encoding and manipulating concepts [11] as well as for human-scale knowledge representation of language vocabularies [12]. Kleyko et al. [13] used VSAs to imitate the concept learning capabilities of honey bees. In robotics [14], VSAs have been used to learn navigation policies for simple reactive behaviors to control a Braitenberg-vehicle robot [15]. In previous work, we proposed an automotive environment representation based on the SPA (Semantic Pointer Architecture), one particular VSA, and employed this representation for tasks such as context classification [16] and vehicle trajectory prediction [17]. For the latter [17], we used the convolutive power of vectors to encapsulate the spatial positions of several vehicles in vectors of fixed length (cf. Fig. 1 and Sec. II). Komer et al. [18] propose a similar representation of continuous space using convolutive powers and analyze it from a neural perspective.

However, given the mathematical properties of VSAs, there are systematic limitations to the amount of information that can be encoded in such a vector representation. These limitations are strongly connected to the chosen dimension of the underlying vector space and are a feature of such modeling architectures, allowing them to model the limitations of cognitive functions of living beings, who are also not able to store unlimited amounts of information.
Considering human subjects for instance, the capacity to process and store information or concepts in short-term memory as well as in other cognitive tasks is subject to numerical restrictions [19]. Hence, numerical limitations of cognitive architectures like VSAs are one way of modeling the numerical restrictions to cognition observed in human subjects. In our context of interest, i.e., automated driving [17], however, we need to analyze the restrictions imposed by VSAs in general and the SPA in particular to provide upper bounds regarding the amount of information that can be stored in our vector representation.

Fig. 1. Visualization of the convolutive vector power encoding one particular driving scene in a 512-dimensional vector. The left plot depicts a scene from a real-world driving data set, while the middle and right plots visualize the similarity between the representation vector of that scene and auxiliary comparison vectors created from a sequence of discrete values as heat map for the target vehicle (middle) and other cars (right).
A. Contribution
In this paper, we analyze the limits regarding the information capacity of distributed representations with the goal of finding bounds for, e.g., the number of concepts that can effectively be stored in a single vector before the accumulation of noise makes it impossible to retrieve the original individual vectors. Our contribution is therefore a two-stage analysis: First, we analyze the amount of information that can effectively be stored in a single vector through superposition (i.e., addition) of several concept vectors. A similar but slightly different experiment has been conducted in [20]: there, the atomic vocabulary vectors, referred to as elemental vectors, are sparse in the sense that most of their elements are 0, and the superposed vectors are normalized after addition. Furthermore, [20] only compares the similarity between the superposition and the original vector with the similarity between the original vector and the most recently added random vector as a baseline for the expected similarity between randomly chosen vectors. In contrast, we calculate the similarity between the superposition vector and n other random vectors for reference.

Secondly, we analyze the information capacity of vector representations involving the convolutive vector power for encapsulating spatial information. Given our scene representation proposed in [17] (cf. Fig. 1 and Sec. II), we are primarily interested in representing two-dimensional values in vectors, which is why we focus our analysis of the convolutive power encoding scheme on this case. In our analysis, we show that the information capacity is tightly linked to the dimension of the underlying vector space and, furthermore, we give upper bounds for the capacity of superposition and convolutive power representations for three different vector dimensionalities.

II. MATERIALS AND METHODS
A. Convolutive vector-power
The SPA [9] is based on Plate's HRRs [5], which are one special case of a Vector Symbolic Architecture [2]. Here, atomic vectors are picked from the real-valued unit sphere, the dot product serves as a measure of similarity, which we denote by φ, and the algebraic operations are component-wise vector addition ⊕ and circular convolution ⊗. In this work, we make use of the fact that for any two vectors v, w, we can write

v ⊗ w = IDFT(DFT(v) ⊙ DFT(w)), (1)

where ⊙ denotes element-wise multiplication, and DFT and IDFT denote the Discrete Fourier Transform and Inverse Discrete Fourier Transform respectively. Using Eq. (1), we define the convolutive power of a vector v by an exponent p ∈ ℝ as

v^p := ℜ(IDFT((DFT_j(v)^p)_{j=0}^{D−1})), (2)

where ℜ denotes the real part of a complex number and DFT_j the j-th Fourier coefficient. Furthermore, we call a vector u unitary if ‖v‖ = ‖v ⊗ u‖ for any other vector v (see [5, Sec. 3.6.3 and 3.6.5] for more details on the convolutive power and unitary vectors).

Finally, we consider any two vectors similar if their similarity is significantly higher than what we would expect from two randomly chosen vectors. For growing dimension D, the cosine similarity of random vectors approximately follows a normal distribution N(µ, σ²) with µ = 0 and σ = 1/√D [21]. Using the two- and three-sigma rule, we denote ε_weak = 2/√D as weak similarity threshold and ε_strong = 3/√D as strong similarity threshold.
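For concreteness, the following is a minimal NumPy sketch of these operations. It is our illustration rather than the paper's published code; the function names are ours, and the threshold constants reflect our two- and three-sigma reading of the distribution above.

```python
import numpy as np

def circular_convolution(v, w):
    # Eq. (1): binding is element-wise multiplication in Fourier space
    return np.fft.ifft(np.fft.fft(v) * np.fft.fft(w)).real

def convolutive_power(v, p):
    # Eq. (2): raise every DFT coefficient to the (real) exponent p,
    # transform back and keep the real part
    return np.fft.ifft(np.fft.fft(v) ** p).real

def make_unitary(v):
    # normalizing all DFT coefficients to magnitude 1 yields a unitary
    # vector: binding with it preserves the norm of the other operand
    fc = np.fft.fft(v)
    return np.fft.ifft(fc / np.abs(fc)).real

def similarity_thresholds(dim):
    # cosine similarity of random unit vectors is approximately N(0, 1/dim);
    # two and three standard deviations give the weak and strong thresholds
    sigma = 1.0 / np.sqrt(dim)
    return 2.0 * sigma, 3.0 * sigma
```

For a non-integer exponent, `fc ** p` uses the principal branch of the complex power, which for unit-magnitude Fourier coefficients matches the fractional-binding interpretation of Eq. (2).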
B. Vector representation

We are primarily interested in representing two-dimensional values in vectors, which is why we focus our analysis of the convolutive power encoding scheme on this case. Hence, we encode two numerical values x, y, i.e., a two-dimensional entity, by generating two random, unitary vectors X, Y representing the corresponding units and applying Equation (2):

V = X^x ⊗ Y^y. (3)

To encode a sequence (x_i, y_i) for i = 1, ..., n of two-dimensional numerical values all sharing the same units, we simply sum up their individual encoding vectors generated via Equation (3), which leads to

V = ∑_{i=1}^{n} X^{x_i} ⊗ Y^{y_i}. (4)

Figure 2 visualizes vectors encoding one (cf. Fig. 2a and Equation (3)) and two numerical entities (cf. Fig. 2b and Equation (4)) given by two units within one vector. To generate the similarities shown in Fig. 2, we calculate the dot product between the vectors actually representing the encoded values and vectors Ṽ_i = X^{x̃_i} ⊗ Y^{ỹ_i} encoding a sequence of discrete sample values (x̃_i, ỹ_i) for i = 1, ..., m. The left plot in each row of Fig. 2 depicts the similarities as a heat map over a two-dimensional grid. The middle and right plots in Fig. 2 visualize the similarities of each unit, which is similar to plotting the heat map in three dimensions as ridges and slicing it through one of the ground axes. In both rows, we observe high similarity peaks well above both similarity thresholds at the actual encoded values and significantly lower similarity values everywhere else. However, comparing the similarities at the positions of the encoded values, we observe a drop of the similarity values when moving from one encoded entity to two.

Fig. 2. Visualization of the convolutive power encoding scheme for 512-dimensional representation vectors, depicting the similarity between the representation vector and auxiliary comparison vectors created from a sequence of discrete values: (a) convolutive power encoding of one two-dimensional numerical entity; (b) convolutive power encoding of two two-dimensional numerical entities. The left plot in each row shows a two-dimensional grid of the similarities, while the middle and right plots show the individual entities respectively. The red circles in the left plot and the dashed blue lines in the middle and right plots indicate the actual encoded values.
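To make Equations (3) and (4) concrete, the snippet below (reusing the helpers from the previous sketch) encodes two hypothetical points in a 512-dimensional vector and probes it with a grid of query vectors, mirroring the heat maps in Fig. 2. The coordinates and grid resolution are illustrative choices, not values from the paper.

```python
rng = np.random.default_rng(42)
D = 512

# random unitary axis vectors representing the two units (Sec. II-B)
X = make_unitary(rng.standard_normal(D))
Y = make_unitary(rng.standard_normal(D))

def encode(points):
    # Eqs. (3)/(4): superpose the convolutive-power encodings X^x (*) Y^y
    return sum(circular_convolution(convolutive_power(X, x),
                                    convolutive_power(Y, y))
               for x, y in points)

V = encode([(2.0, -3.5), (-5.0, 1.0)])   # two entities, as in Fig. 2b

# dot-product similarity against query vectors on a discrete grid (cf. Fig. 2)
grid = np.linspace(-10, 10, 41)
heat = np.array([[np.dot(V, encode([(x, y)])) for x in grid] for y in grid])
iy, ix = np.unravel_index(heat.argmax(), heat.shape)
print("strongest response near", (grid[ix], grid[iy]))
```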
III. EXPERIMENTS

In this section, we conduct our analysis regarding the capacity of distributed representations for simple superposition in Sec. III-A and for representations employing the convolutive power in Sec. III-B. The code for reproducing our analysis, results and figures can be found online at https://github.com/fmirus/spa_capacity_analysis.

A. Superposition capacity
First, we evaluate the capacity of superposition, i.e., the addition operation. Superposition is used to store and combine several concept vectors v_i for i = 1, ..., n in an unordered set

s = ∑_{i=1}^{n} v_i. (5)

We can determine whether a vector of interest w belongs to that unordered set by calculating the similarity φ(s, w) between the superposition vector and the vector of interest. For sufficiently high-dimensional vectors, the similarity φ(s, w) will be close to 0 in case the vector w is not part of the sum. However, the more vectors we add to the superposition vector s, the more noise accumulates in the representation, which decreases the similarity between the superposition vector s and its individual ingredients v_i. In order to analyze how many vectors can be added together by superposition before individual vectors become irretrievable, we conducted the following experiment: assuming we want to add n vectors v_i for i = 1, ..., n into a superposition vector s as in Equation (5), we randomly generate a vocabulary of 2n vectors v_i for i = 1, ..., 2n and sum up the first n members to create our superposition vector s. Then we calculate the cosine similarity φ(s, v_i) between the superposition vector s and every vector v_i for i = 1, ..., 2n in the vocabulary.

Figure 3 shows the result of our experiment for 3 random vocabularies per superposition length containing vectors of dimension 256, 512 and 1024. The blue boxes in each figure illustrate the similarity between the superposition vector s and each of the individual vectors v_i for i = 1, ..., n it contains, i.e., the members of the unordered superposition set. The orange boxes depict the similarity between s and the other vocabulary vectors v_i for i = n+1, ..., 2n it does not contain, i.e., the non-members. The dotted red and green lines indicate the SPA's weak and strong similarity thresholds depending on the dimension of the vector space. Considering the weak similarity threshold ε_weak = 2/√D, we observe that for a vector dimension of 256 the SPA allows roughly 50 items to be stored in a superposition vector. For the higher vector dimensions 512 and 1024, the number of items that can be superposed increases to roughly 100 and 200 respectively. Considering the strong similarity threshold ε_strong = 3/√D, the upper bounds for the number of items that can be stored in a superposition vector are slightly more conservative with 25, 50 and 100 for vector space dimensions of 256, 512 and 1024 respectively. We also observe in our experiments that the similarity between the superposition vector and non-member random vectors is consistently below the weak similarity threshold ε_weak for the majority of the samples. However, once the similarity between the superposition vector and its members drops below either of the similarity thresholds for the majority of the samples, we cannot distinguish between members and non-members with a sufficiently high probability. For 256-dimensional vectors for instance, we even observe that the members and non-members become nearly indistinguishable when adding more than 80 vectors. Thus, we have to choose rather conservative bounds for the number of items to be encoded in a superposition vector.

Fig. 3. Visualization of the SPA's superposition capacity for vector dimensions 256, 512 and 1024. The blue boxes indicate the similarity between the superposition vector and its summands, the orange boxes illustrate the similarity between the superposition vector and other randomly generated vectors. The dotted lines visualize the similarity thresholds based on the vector dimensionality for reference.
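The following condensed sketch reproduces the shape of this experiment, assuming the NumPy helpers from Sec. II. The seed, the tested set sizes, and the single run per configuration are our simplifications; the paper uses three random vocabularies per superposition length.

```python
import numpy as np

def superposition_similarities(dim, n, rng):
    # vocabulary of 2n random unit vectors; the first n form s (Eq. (5))
    vocab = rng.standard_normal((2 * n, dim))
    vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)
    s = vocab[:n].sum(axis=0)
    sims = vocab @ s / np.linalg.norm(s)  # cosine similarity of s to all 2n vectors
    return sims[:n], sims[n:]             # members vs. non-members

rng = np.random.default_rng(0)
for dim in (256, 512, 1024):
    for n in (25, 50, 100, 200):
        members, non_members = superposition_similarities(dim, n, rng)
        weak = 2.0 / np.sqrt(dim)
        print(f"D={dim}, n={n}: "
              f"{(members > weak).mean():.0%} members above weak threshold, "
              f"{(non_members > weak).mean():.0%} non-members above")
```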
B. Capacity of structured representations involving convolutive powers

In the previous section, we analyzed the SPA's capacity regarding the number of items that can be stored in an unordered set using superposition. For encoding more complex information, such as driving situations [17], in a semantic vector substrate, we employ more complex representations than the superposition of single items alone. In this section, we analyze the capacity of structured vector representations involving the convolutive vector-power (see Eq. (2)). Figure 2 illustrates that we can unbind positions by querying the representation vector with sample vectors encoding discrete position examples, but also that encoding several entities of the same type in one position vector as in Fig. 2b yields lower similarities at the true positive positions compared to the encoding of only one item as in Fig. 2a. Hence, our capacity analysis has to cover the number of objects encoded in one vector, but also the number of items per object class.

Therefore, we conduct the following experiment: assuming we want to encode n spatial entities, i.e., objects o_i with two-dimensional location information (x_i, y_i) for i = 1, ..., n as shown in Fig. 2 for n = 1 and n = 2, we generate a vocabulary of random vectors v_i for i = 1, ..., n encoding object class labels and random unitary vectors X, Y to encode the units of the spatial information. In contrast to the experiment in III-A, where we simply summed up a certain number of random vectors, we are interested in a more specific analysis, since there are several possibilities to distribute the positional values (x_i, y_i) over the available object class vectors v_i. For instance, for a total number of two superpositions, i.e., n = 2, there are two possibilities to generate our representation vector, namely

s₁ = v₁ ⊗ X^{x₁} ⊗ Y^{y₁} + v₁ ⊗ X^{x₂} ⊗ Y^{y₂}, (6)
s₂ = v₁ ⊗ X^{x₁} ⊗ Y^{y₁} + v₂ ⊗ X^{x₂} ⊗ Y^{y₂}. (7)

The vector s₁ in Equation (6) encodes two objects of the same type, while the vector s₂ encodes occurrences of two different object types at the given locations. As we are working with random vectors in this experiment, we can, without loss of generality, skip the vector encoding two objects of the type represented by the vector v₂, which would yield a result equivalent to Equation (6). More generally, we are interested in all sets

C_{m,j} = { 0 < k₁, ..., k_m ≤ n | m ≤ n and ∑_{i=1}^{m} k_i = n } (8)

of natural numbers k_i summing up to the total number of objects n, ignoring permutations of the k_i. We index the sets with j, since there potentially exist several possibilities to decompose n into sums of m natural numbers.

In our experiments, for each number n of total objects to be encoded in the vector representation, we calculate all possible sets C_{m,j} (ignoring permutations) and generate random position values (x_i, y_i) for i = 1, ..., n and a random vocabulary as described above. For each set C_{m,j}, we generate a representation vector

s_{m,j} = ∑_{i=1}^{m} ∑_{l=1}^{k_i} v_i ⊗ X^{x_{i,l}} ⊗ Y^{y_{i,l}} (9)

as well as query vectors P_i = X^{x̃_i} ⊗ Y^{ỹ_i} encoding a sequence of discrete sample values (x̃_i, ỹ_i) for i = 1, ..., M evenly distributed over the length of the positional encoding grid. In other words, Equation (9) states that each class label v_i appears k_i times, yielding a sum of n objects. We query the representation vector for the positions of each class by binding it to the pseudo-inverse (see also [5], [9]) element v̄_i for each class label vector, i.e.,

s_{m,j} ⊗ v̄_i ≈ ∑_{l=1}^{k_i} X^{x_{i,l}} ⊗ Y^{y_{i,l}}, (10)

and calculate the similarity with the discretized position vectors P_k to get

s_{i,k} = |φ(s_{m,j} ⊗ v̄_i, P_k)|. (11)

Fig. 4. Capacity analysis for the superposition of vectors encoding spatial positions using the convolutive vector-power for varying vector dimensions.
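A compact sketch of one instance of this procedure, reusing the helpers from Sec. II-A: we build s_{m,j} for the illustrative decomposition (k₁, k₂) = (2, 1), unbind one class label via its pseudo-inverse, and scan a grid of query vectors as in Eqs. (10) and (11). The involution-based pseudo-inverse follows Plate [5]; all concrete positions are our own choices.

```python
def pseudo_inverse(v):
    # involution: the approximate inverse under circular convolution [5]
    return np.roll(v[::-1], 1)

rng = np.random.default_rng(1)
D = 512
X = make_unitary(rng.standard_normal(D))
Y = make_unitary(rng.standard_normal(D))
labels = rng.standard_normal((2, D))
labels /= np.linalg.norm(labels, axis=1, keepdims=True)   # v_1, v_2

def spatial(x, y):
    return circular_convolution(convolutive_power(X, x),
                                convolutive_power(Y, y))

# Eq. (9) for (k_1, k_2) = (2, 1): two objects of class v_1, one of class v_2
objects = {0: [(2.0, -3.0), (-4.0, 5.0)], 1: [(1.0, 1.0)]}
s = sum(circular_convolution(labels[i], spatial(x, y))
        for i, pts in objects.items() for x, y in pts)

# Eqs. (10) and (11): unbind class v_1 and scan a grid of query vectors
unbound = circular_convolution(s, pseudo_inverse(labels[0]))
grid = np.linspace(-10, 10, 41)
sims = np.abs([[np.dot(unbound, spatial(x, y)) for x in grid] for y in grid])
# the peaks of sims should sit near (2.0, -3.0) and (-4.0, 5.0)
```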
Fig. 5. Capacity analysis for the superposition of vectors encoding spatial positions using the convolutive vector-power for varying vector dimensions. In contrast to Fig. 4, this figure illustrates the similarity for vectors containing spatial information for several objects of the same class.
For samples close to the originally encoded positions, i.e., |x_i − x̃_i| < ε and |y_i − ỹ_i| < ε for a certain threshold ε, we consider s_{i,k} a positive similarity denoting a member of the representation vector. Otherwise, we consider the similarity s_{i,k} at position (x̃_i, ỹ_i) not a member of the representation vector s_{m,j}.

Figure 4 shows the results of this capacity analysis regarding the total number of superposed objects within the representation vector for varying vector dimensions. In Sec. III-A, we had already analyzed the SPA's capacity regarding the number of items that can be stored in an unordered set using superposition. Similar to Fig. 2, we observe that the similarity of the non-members is in the order of magnitude of the similarity thresholds, while the similarity at the member positions decreases with a growing number of spatial items encoded in the vector. However, for encoding more complex information, like automotive scenes [17], in a semantic vector substrate, we employ more complex representations than the superposition of single items alone.

Therefore, Fig. 5 shows a different evaluation of the same data, with the number of addition operations per class on the x-axis. In contrast to Fig. 4, Fig. 5 illustrates the similarity for vectors containing spatial information for several objects of the same class. That is, Fig. 5 illustrates the similarity of vectors containing a specific number k of superpositions per class on its x-axis, independent of the total number of superpositions. We observe that not only does the similarity of the members decrease with a growing number of superpositions per class, but the similarity of the non-members increases beyond the weak similarity threshold. Similar to the simple superposition capacity analysis, we consider the point in the plots where the member similarities fall below the strong similarity threshold the upper bound for the maximal number of spatial objects per class to be encoded in this representational substrate. For instance, this upper bound for 256-dimensional vectors is 10 superpositions per class, which is roughly half of the upper bound for the number of simple superpositions. For higher-dimensional vectors (here 512 and 1024), these limits are beyond the evaluated number of superpositions per class. Therefore, using at least 512-dimensional vectors for automotive scene representation yields a sufficiently high information capacity. On the other hand, we expect upper bounds for the higher dimensions similar to 256-dimensional vectors, i.e., roughly half the number of superpositions stated in Sec. III-A.
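The rule for reading these upper bounds off Figs. 3 to 5 can be stated compactly. The helper below is hypothetical (not from the paper's codebase); it assumes member similarities collected per superposition count and uses the strong threshold 3/√D as reconstructed in Sec. II.

```python
import numpy as np

def capacity_bound(member_sims_by_n, dim):
    # member_sims_by_n: {number of superpositions: array of member similarities};
    # returns the largest n whose median member similarity still exceeds
    # the strong similarity threshold
    strong = 3.0 / np.sqrt(dim)
    bound = 0
    for n in sorted(member_sims_by_n):
        if np.median(member_sims_by_n[n]) > strong:
            bound = n
        else:
            break
    return bound
```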
IV. CONCLUSION

In this paper, we analyzed the capacity of structured vector representations in VSAs based on simple superposition and on superposition combined with the convolutive power encoding of spatial information. We provided a more detailed analysis of the superposition capacity compared to those available in the literature, e.g. in [20]. Furthermore, we evaluated the capacity of structured representations involving convolutive vector powers to encode spatial information, which, to the best of our knowledge, is the first of its kind. Thereby, we found upper bounds for the amount of information that can effectively be encoded in such representations depending on the dimension of the underlying vector space. These bounds have to be considered in future work regarding distributed representations, for instance, of automotive scenes as in [17], to evaluate whether the amount of information to be encoded in the vector representation is compliant with these bounds. This will allow a conclusive assessment of the limits of structured vector representations in general, whereas our particular focus is on the automotive context.
REFERENCES

[1] P. Thagard, "Cognitive Architectures," in The Cambridge Handbook of Cognitive Science, K. Frankish and W. Ramsey, Eds. Cambridge University Press, 2012, pp. 50–70.
[2] R. Gayler, "Vector Symbolic Architectures answer Jackendoff's challenges for cognitive neuroscience," in ICCS/ASCS International Conference on Cognitive Science, P. Slezak, Ed., University of New South Wales. CogPrints, 2003, pp. 133–138.
[3] ——, "Multiplicative Binding, Representation Operators and Analogy," in Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences, D. Gentner, K. J. Holyoak, and B. N. Kokinov, Eds. Sofia, Bulgaria: New Bulgarian University, 1998, pp. 1–4. [Online]. Available: http://cogprints.org/502
[4] P. Kanerva, Sparse Distributed Memory. MIT Press, 1988.
[5] T. Plate, "Distributed Representations and Nested Compositional Structure," Ph.D. dissertation, University of Toronto, 1994.
[6] P. Smolensky, "Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems," Artificial Intelligence, vol. 46, no. 1–2, pp. 159–216, 1990.
[7] K. Schlegel, P. Neubert, and P. Protzel, "A comparison of Vector Symbolic Architectures," arXiv:2001.11797, Jan. 2020. [Online]. Available: https://arxiv.org/abs/2001.11797
[8] D. Rasmussen and C. Eliasmith, "A neural model of rule generation in inductive reasoning," Topics in Cognitive Science, vol. 3, no. 1, pp. 140–153, 2011.
[9] C. Eliasmith, How to Build a Brain: A Neural Architecture for Biological Cognition. Oxford University Press, 2013.
[10] C. Eliasmith, T. C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and D. Rasmussen, "A Large-Scale Model of the Functioning Brain," Science, vol. 338, no. 6111, pp. 1202–1205, 2012. [Online]. Available: http://science.sciencemag.org/content/338/6111/1202
[11] P. Blouw, E. Solodkin, P. Thagard, and C. Eliasmith, "Concepts as Semantic Pointers: A Framework and Computational Model," Cognitive Science, vol. 40, no. 5, pp. 1128–1162, 2016.
[12] E. Crawford, M. Gingerich, and C. Eliasmith, "Biologically Plausible, Human-Scale Knowledge Representation," Cognitive Science, vol. 40, no. 4, pp. 782–821, 2016.
[13] D. Kleyko, E. Osipov, R. W. Gayler, A. I. Khan, and A. G. Dyer, "Imitation of honey bees' concept learning processes using Vector Symbolic Architectures," Biologically Inspired Cognitive Architectures, vol. 14, pp. 57–72, 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2212683X15000456
[14] P. Neubert, S. Schubert, and P. Protzel, "An introduction to hyperdimensional computing for robotics," KI – Künstliche Intelligenz, vol. 33, no. 4, pp. 319–330, 2019.
[15] P. Neubert, S. Schubert, and P. Protzel, "Learning vector symbolic architectures for reactive robot behaviours," in Proc. of International Conference on Intelligent Robots and Systems (IROS), Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics, 2016.
[16] F. Mirus, T. C. Stewart, and J. Conradt, "Towards cognitive automotive environment modelling: reasoning based on vector representations," in European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2018, pp. 55–60.
[17] F. Mirus, P. Blouw, T. C. Stewart, and J. Conradt, "An investigation of vehicle behavior prediction using a vector power representation to encode spatial positions of multiple objects and neural networks," Frontiers in Neurorobotics, vol. 13, 2019.
[18] B. Komer, T. C. Stewart, A. R. Voelker, and C. Eliasmith, "A neural representation of continuous space using fractional binding," in Proceedings of the 41st Annual Meeting of the Cognitive Science Society, 2019.
[19] G. A. Miller, "The magical number seven, plus or minus two: some limits on our capacity for processing information," Psychological Review, vol. 63, no. 2, pp. 81–97, 1956.
[20] M. Wahle, D. Widdows, J. R. Herskovic, E. V. Bernstam, and T. Cohen, "Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts," in AMIA Annual Symposium Proceedings, 2012. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3540485/
[21] D. Widdows and T. Cohen, "Reasoning with vectors: A continuous model for fast robust inference," Logic Journal of the IGPL, vol. 23, no. 2, pp. 141–173, 2015.