String Tightening as a Self-Organizing Phenomenon: Computation of Shortest Homotopic Path, Smooth Path, and Convex Hull
Bonny Banerjee∗, Ph.D.

February 17, 2021

Abstract
The phenomenon of self-organization has been of special interest to the neural network community for decades. In this paper, we study a variant of the Self-Organizing Map (SOM) that models the self-organization of the particles forming a string when the string is tightened from one or both ends. The proposed variant, called the String Tightening Self-Organizing Neural Network (STON), can be used to solve certain practical problems, such as computation of shortest homotopic paths, smoothing of paths to avoid sharp turns, and computation of the convex hull. These problems are of considerable interest in computational geometry, robotics path planning, AI (diagrammatic reasoning), VLSI routing, and geographical information systems. Given a set of obstacles and a string with two fixed terminal points in a two-dimensional space, the STON model continuously tightens the given string until the unique shortest configuration in terms of the Euclidean metric is reached. The STON minimizes the total length of a string on convergence by dynamically creating and selecting feature vectors in a competitive manner. A proof of correctness of this anytime algorithm and experimental results obtained by its deployment are presented in the paper.
Index terms— Path planning, homotopy, shortest path, convex hull, smooth path, self-organization, neural network.
∗ This research was supported by participation in the Advanced Decision Architectures Collaborative Technology Alliance sponsored by the U.S. Army Research Laboratory under Cooperative Agreement DAAD19-01-2-0009. This work was done while the author was with the Laboratory for Artificial Intelligence Research, Department of Computer Science & Engineering, The Ohio State University, Columbus, OH 43210, USA. An earlier version of this article was published as [1]. E-mail: [email protected]

1 Introduction

Self-organization, as a phenomenon, has received considerable attention from the neural network community in the last couple of decades. Several attempts have been made to use neural networks to model different self-organization phenomena. One of the most well known of such attempts is that of Kohonen, who proposed the Self-Organizing Map (SOM) [2], inspired by the way in which various human sensory impressions are topographically mapped into the neurons of the brain. SOM possesses the capability to extract features from a multidimensional data set by creating a vector quantizer, adjusting weights from common input nodes to M output nodes arranged in a two-dimensional grid. At convergence, the weights specify the clusters or vector centers of the set of input vectors, such that the point density function of the vector centers tends to approximate the probability density function of the input vectors. Several authors in different contexts have reported dynamic versions of SOM [2, 3, 4, 5, 6, 7, 8, 9, 10, 11].

In this paper, assuming a string is composed of a sequence of particles, we claim that the phenomenon undergone by the particles of the string, when the string is pulled from one or both ends to tighten it, is that of self-organization, and we model the phenomenon using a variant of SOM, called the String Tightening Self-Organizing Neural Network (STON).
We further use the proposed variant to solve a few well-known practical problems: computation of the shortest path in a given homotopy class, smoothing of paths to avoid sharp turns, and computation of the convex hull. Other than theoretical considerations in computational geometry [12], computation of shortest homotopic paths is of considerable interest in robotics path-planning [13], AI (diagrammatic reasoning) [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], VLSI routing [25], and geographical information systems. Smooth paths are required for navigation of large robots incapable of taking sharp turns, and also for handling unexpected obstacles. Generating a path that is smooth, shorter, collision-free, and homotopic to the original path requires generating the configuration space of a robot, which is computationally expensive and difficult to represent [26]. Computation of the convex hull finds numerous applications in computational geometry algorithms, pattern recognition, image processing, and so on. The aim of this paper is to study the properties of STON and how it might be applied to solve the practical problems described above.

The remainder of this paper is organized as follows. In the next section, the STON algorithm is described assuming the given string is sampled at a frequency of at least 2/d, where d is the minimum distance between the obstacles. Thereafter, an analysis of the algorithm is presented along with proofs of its important properties and correctness. Section 4 discusses how STON might be extended when the above constraint on sampling is not met; the extension is used for computing the shortest path in a given homotopy class, and proof of correctness and complexity analysis of the extension are also included. Finally, simulation results are presented for computing shortest homotopic paths, smooth paths, and convex hulls using both the original and extended algorithms. The paper concludes with a general discussion.
2 The STON Algorithm

2.1 Definitions

A string π in two-dimensional space (ℜ²) may be defined as a continuous mapping π : [0, 1] → ℜ², where π(0) and π(1) are the two terminal points of the string. A string is simple if it does not intersect itself; otherwise it is non-simple. Let π_1 and π_2 be two strings in ℜ² sharing the same terminal points, i.e. π_1(0) = π_2(0) and π_1(1) = π_2(1), and avoiding a set of obstacles P ⊂ ℜ². The strings π_1, π_2 are considered to be homotopic to each other, or to belong to the same homotopy class, with respect to the set of obstacles P if there exists a continuous function Ψ : [0, 1] × [0, 1] → ℜ² such that

1. Ψ(0, t) = π_1(t) and Ψ(1, t) = π_2(t), for 0 ≤ t ≤ 1
2. Ψ(λ, 0) = π_1(0) = π_2(0) and Ψ(λ, 1) = π_1(1) = π_2(1), for 0 ≤ λ ≤ 1
3. Ψ(λ, t) ∉ P, for 0 ≤ λ ≤ 1, 0 ≤ t ≤ 1

2.2 Problem Statement

Given a string π_i, specified in terms of sampled points, and a set of obstacles P, STON computes a string π_s such that π_i and π_s belong to the same homotopy class, and the Euclidean distance covered by π_s is the shortest among all strings homotopic to π_i. It is noteworthy that π_s is unique and has a canonical form [27].

2.3 The Algorithm

Assume a string wound around obstacles in ℜ² with two fixed terminal points. A shorter configuration of the string can be obtained by pulling its terminals; the unique shortest configuration is obtained by pulling the terminals until they cannot be pulled any more. The proposed algorithm models this phenomenon as a self-organized mapping of the points forming a given configuration of a string into the points forming the desired shorter configuration of the string. Let us consider a set of n data points or obstacles, P = {p_1, p_2, ..., p_n}, representing the input signals, and a sequence of a variable number (say, k) of processors ⟨q_1, q_2, ..., q_k⟩, each of which (say q_i) is associated with a weight vector w_i(t) at any time t. A weight vector represents the position of its processor in ℜ². If the k processors are placed on a string in ℜ², the STON is an anytime algorithm for tuning the corresponding weights to different domains of the input signals such that, on convergence, the processors are located so as to minimize a distance function, φ(w), given by

φ(w) = Σ_{i=1}^{k−1} ‖w_{i+1}(t) − w_i(t)‖    (1)

where q_i, q_{i+1} are two consecutive processors on the string with corresponding weights w_i(t), w_{i+1}(t) at any time t. The algorithm further guarantees that the final configuration of the string formed by the sequence of processors at convergence lies in the same homotopy class as the string formed by the initial sequence of processors with respect to P. Thus, assuming fixed q_1 and q_k, the STON finds the shortest configuration of the k processors in an unsupervised manner.
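As a concrete illustration, the distance function of eq. 1 can be sketched in a few lines of Python. The list-of-coordinate-pairs representation and the function name `phi` are assumptions of this sketch, not part of the original formulation:

```python
import math

def phi(w):
    """Total Euclidean length (eq. 1) of the string whose processors
    currently sit at the positions w = [w_1, ..., w_k]."""
    return sum(math.dist(w[i], w[i + 1]) for i in range(len(w) - 1))

# A 3-4-5 style example: two straight segments of lengths 5 and 4.
print(phi([(0, 0), (3, 4), (3, 8)]))  # → 9.0
```

Each sweep of the algorithm described below strictly decreases this quantity (Theorem 2), which is what makes STON an anytime algorithm: it can be stopped after any sweep with a valid, shorter, homotopy-preserving configuration.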
The phenomenon undergone by the particles forming the string is modeled by the processors in the neural network. The STON is initialized with a given number of connected processors, the weight of each of which is initialized at a unique point on the given configuration of a string. A feature vector, presented to the STON, is an attractor point in ℜ² and is either created dynamically or chosen selectively from the given set of input obstacles P. The weight vectors are updated iteratively on the basis of the created and chosen feature space S(t), where S(t) = {x_1(t), x_2(t), ..., x_k(t)} is the set of feature vectors at any time t. It is noteworthy that x_i(t) is not necessarily unique. Unlike SOM and many of its variants, randomized updating of weights does not yield better results in the STON model; updating weights sequentially or randomly yields the same result in terms of output quality as well as computation time. On convergence, the locations of the processors representing the unique shortest configuration homotopic to the given configuration of the string are obtained.

A feature vector x_i(t) is created if the triangular area spanned by three consecutive processors q_{m−1}, q_m, q_{m+1} does not contain any obstacle p_j ∈ P. In that case,

x_i(t) = (w_{m−1}(t′) + w_{m+1}(t″)) / 2    (2)

where t′, t″ assume the value t if the weight has not yet been updated in the current iteration or sweep and the value t + 1 if the weight has been updated in the current sweep, and 1 < m < k. If an obstacle, say p_j ∈ P, lies within the triangular area spanned by the three consecutive processors q_{m−1}, q_m, q_{m+1}, the feature vector x_i(t) is chosen to be

x_i(t) = p_j    (3)

For this algorithm, we assume the given string is sampled such that there cannot exist multiple non-identical obstacles within the triangular area spanned by any three consecutive processors (see Appendix A.1 for how to achieve such sampling).
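The create-or-select rule of eqs. 2 and 3 can be sketched as follows. The sign-based point-in-triangle test and both function names are illustrative assumptions of this sketch; the sampling assumption above guarantees at most one obstacle inside any triangle:

```python
def in_triangle(p, a, b, c):
    """True if point p lies inside (or on the boundary of) triangle a, b, c."""
    def cross(o, u, v):
        # z-component of (u - o) x (v - o)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # inside iff all signs agree

def feature_vector(w_prev, w_curr, w_next, obstacles):
    """Eq. 3 if an obstacle lies in the processor's triangle, else eq. 2.
    The sampling assumption rules out multiple obstacles per triangle."""
    inside = [p for p in obstacles if in_triangle(p, w_prev, w_curr, w_next)]
    if inside:
        return inside[0]                     # eq. 3: choose the obstacle
    return ((w_prev[0] + w_next[0]) / 2,     # eq. 2: midpoint of neighbors
            (w_prev[1] + w_next[1]) / 2)
```

For example, with obstacle (2, 1) inside the triangle (0, 0), (2, 2), (4, 0), the obstacle itself is returned (eq. 3); with no obstacle inside, the neighbors' midpoint (2.0, 0.0) is returned (eq. 2).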
The STON evolves by means of a processor evolution mechanism, given by [28]

w_m(t + 1) = w_m(t) + α(t)[x_j(t) − w_m(t)]    (4)

where α(t) is the gain term or learning rate, which might vary with time, and 0 ≤ α(t) ≤ 1. α(t) might be unity only when feature vectors are created according to eq. 2. All the weight vectors are updated exactly once in a single sweep, indexed by t. If modification of weights is continued in this manner, the processors tend to produce shorter configurations at the end of each sweep. The weight vectors converge when

‖w_i(t + 1) − w_i(t)‖ < ε, ∀i    (5)

where ε is a predetermined, sufficiently small positive scalar.

STON is a variant of SOM. As in SOM, each feature vector in STON pulls the selected processors in a neighborhood towards itself as a result of updating in a topologically constrained manner, ultimately leading to an ordering of processors that minimizes some residual error. The neighborhood in SOM is a smoothing kernel over a two-dimensional grid, often taken as a Gaussian that shrinks over time. In STON, the neighborhood of a feature vector includes all those processors that form triangles with their adjacent processors such that the feature vector lies within their triangles. Such a neighborhood is conceptually similar to that proposed in [29], where the neighborhood depends not on time but on the nature of the input signals. STON incorporates competitive learning, as the weights adapt themselves to specific chosen features of the input signal, defined in terms of the obstacles. The residual error is defined in SOM in terms of variance, while in STON it is defined in terms of Euclidean distance.

Figure 1: If p_j lies inside the triangle formed by q_{i−1}, q_i, q_{i+1}, then q_i has to lie in the shaded region.
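A hedged sketch of one sweep of the update rule (eq. 4) together with the stopping test (eq. 5) follows; the `feature` callable (mapping a processor's neighborhood to its feature vector) and the helper names are assumptions of this sketch:

```python
import math

def sweep(w, feature, alpha):
    """One sweep of STON: update each interior weight exactly once (eq. 4).
    feature(w_prev, w_curr, w_next) returns the feature vector x for a
    processor; the terminal weights w[0] and w[-1] stay fixed.  Returns
    the largest single-weight displacement, for the test of eq. 5."""
    max_move = 0.0
    for i in range(1, len(w) - 1):
        x = feature(w[i - 1], w[i], w[i + 1])
        new = (w[i][0] + alpha * (x[0] - w[i][0]),
               w[i][1] + alpha * (x[1] - w[i][1]))
        max_move = max(max_move, math.dist(new, w[i]))
        w[i] = new                    # sequential (in-place) updating
    return max_move

def tighten(w, feature, alpha=1.0, eps=1e-9, max_sweeps=10000):
    """Sweep until no weight moves more than eps (eq. 5)."""
    for _ in range(max_sweeps):
        if sweep(w, feature, alpha) < eps:
            break
    return w
```

With an obstacle-free midpoint feature (eq. 2 only), a zig-zag string with fixed endpoints settles onto the straight segment joining them, which is the unique shortest configuration in the trivial homotopy class.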
From the inputs, the net adapts itself dynamically in an unsupervised manner to acquire a stable structure at convergence, thereby manifesting self-organization [30].

3 Analysis of STON

The STON possesses certain key properties, discussed in this section, that eventually lead to the proof of correctness of the algorithm.

Theorem 1.
The configuration of a string formed by the sequence of processors at initialization and the same at convergence are homotopic.

Proof.
We start by noting that, given a fixed set of obstacles and fixed terminal points, the homotopy class of a string can be altered only by crossing an obstacle. The configuration of a string formed by the sequence of processors at any time t is obtained by updating the weights with respect to the feature vectors selected at time t − 1, the feature vectors being selected according to eq. 2 or eq. 3. In the first case, creation of a feature vector and updating of the weight does not change the homotopy class of the string: as there was no obstacle in the triangular area, updating the weight could not result in crossing any obstacle.

In the second case, a feature vector is selected by eq. 3 from the set of obstacles, and a processor is pulled towards the feature vector by updating its weight. It remains to be proven that such selection of feature vectors and updating of weights cannot make the string cross any obstacle. Let q_{i−1}, q_i, q_{i+1} be three consecutive processors on a string with corresponding weights w_{i−1}(t′), w_i(t), w_{i+1}(t″); t′, t″ ∈ {t, t + 1}, and let x_i(t) = p_j be the selected feature vector (see Fig. 1). To complete the proof we need to show that p_j will never be crossed if (i) the weight for processor q_i is updated, and (ii) the weight for one of the neighbors of q_i (i.e. q_{i−1} or q_{i+1}) is updated.

First, note that in order for p_j to be inside the triangle formed by q_{i−1}, q_i, q_{i+1}, the processor q_i has to be in the region bounded by extensions of the lines joining q_{i−1} and q_{i+1} to p_j (the shaded region in Fig. 1). Since updating the weight for processor q_i pulls it towards the obstacle p_j along a straight line, q_i remains within the shaded region after updating, so the segments q_{i−1}q_i and q_iq_{i+1} never cross the obstacle p_j.

Figure 2: The feature vector for updating the weight for q_{i+1}, if not p_j, can lie only in the shaded region.
That proves condition (i). Now consider the case when the weight for a neighbor of q_i, say q_{i+1} without loss of generality, is updated. Let q_{i+2} be the other neighbor of q_{i+1}. Then either the obstacle p_j lies inside the triangle formed by q_i, q_{i+1}, q_{i+2}, or it does not. If p_j lies inside, it is the selected feature vector that pulls q_{i+1} towards itself, and it will never be crossed (due to condition (i)). If p_j does not lie inside, then either some other obstacle, say p_{j′}, lies inside the triangle formed by q_i, q_{i+1}, q_{i+2}, or no obstacle lies inside. In the former case, p_{j′} is the selected feature vector, while in the latter, the feature vector is created according to eq. 2. In either case, the feature vector can lie only in the partition, bounded by the extensions of the lines joining q_i and q_{i+1} to p_j, in which q_{i+1} lies (the shaded region in Fig. 2). Thus, updating the weight for q_{i+1} will not make the string cross the obstacle p_j. In general, updating the neighbors of q_i will not make the string cross p_j, proving condition (ii).

From this we conclude that updating a weight vector with respect to a created or selected feature vector does not change the homotopy class of a string. Hence, the configurations of a string at the end of consecutive sweeps are homotopic. But homotopy is a transitive relation. This concludes the proof that the configurations of a string at initialization and at convergence are homotopic.

Theorem 2.
The Euclidean distance covered by the configuration of a string formed by the sequence of processors at time t + 1 is less than the same at time t.

Proof. Consider a triangle formed by three consecutive processors q_{i−1}, q_i, q_{i+1} with corresponding weights w_{i−1}(t′), w_i(t), w_{i+1}(t″); t′, t″ ∈ {t, t + 1}, on a configuration of a string (see Fig. 3). Let w_i(t + 1) be the weight vector after updating w_i(t) with respect to a feature vector either created according to eq. 2 or chosen according to eq. 3. We are required to prove that the Euclidean distance covered by a configuration of the string from q_{i−1}[w_{i−1}(t′)] to q_{i+1}[w_{i+1}(t″)] via q_i[w_i(t + 1)] is less than the same via q_i[w_i(t)].

Figure 3: The Euclidean distance covered by a configuration of the string from q_{i−1}[w_{i−1}(t′)] to q_{i+1}[w_{i+1}(t″)] via q_i[w_i(t + 1)] is less than the same via q_i[w_i(t)].

To prove this, we extend the line segment q_{i−1}[w_{i−1}(t′)] q_i[w_i(t + 1)] to intersect the line segment q_i[w_i(t)] q_{i+1}[w_{i+1}(t″)] at a point, say A, located at a. Then, from Fig. 3, using the triangle inequality, we get

‖w_{i−1}(t′) − w_i(t + 1)‖ + ‖w_i(t + 1) − a‖ < ‖w_{i−1}(t′) − w_i(t)‖ + ‖w_i(t) − a‖
‖w_i(t + 1) − w_{i+1}(t″)‖ < ‖w_i(t + 1) − a‖ + ‖a − w_{i+1}(t″)‖

From the above inequalities, we get

‖w_{i−1}(t′) − w_i(t + 1)‖ + ‖w_i(t + 1) − w_{i+1}(t″)‖ < ‖w_{i−1}(t′) − w_i(t)‖ + ‖w_i(t) − a‖ + ‖a − w_{i+1}(t″)‖

Since a lies on the segment joining w_i(t) and w_{i+1}(t″), the right-hand side equals ‖w_{i−1}(t′) − w_i(t)‖ + ‖w_i(t) − w_{i+1}(t″)‖. Thus, at any time t, after updating a weight w_i(t), we have

‖w_{i−1}(t′) − w_i(t + 1)‖ + ‖w_i(t + 1) − w_{i+1}(t″)‖ < ‖w_{i−1}(t′) − w_i(t)‖ + ‖w_i(t) − w_{i+1}(t″)‖

i.e.
updating w_i(t) contributes to minimizing the sum of the lengths of the segments q_{i−1}[w_{i−1}(t′)] q_i[w_i(t)] and q_i[w_i(t)] q_{i+1}[w_{i+1}(t″)]. Each weight is updated exactly once in every sweep. Thus, updating a weight contributes to minimizing the length of the current configuration of the string at every sweep, thereby minimizing φ(w).

Theorem 1 shows that the STON algorithm guarantees the final configuration π_s of the string to belong to the same homotopy class as its initial configuration π_i. By Theorem 2, if a sufficient number of sweeps are computed, the shortest configuration in the given homotopy class is reached. Thus, the proposed algorithm is correct with respect to the goal of obtaining the shortest homotopic configuration of a string as defined in section 2.2. This, however, does not guarantee that the optimum solution will always be reached. It is possible for the algorithm to get stuck at suboptimal solutions, a situation that can be averted by choosing α much less than unity when feature vectors are selected by eq. 3. We discuss this issue further in section 5.

Figure 4: The STON algorithm might fail to perform correctly if the given path is sparsely sampled, as illustrated in this example. The configuration formed by q_{i−1}[w_{i−1}(t′)], q_i[w_i(t)], q_{i+1}[w_{i+1}(t″)] and that formed by q_{i−1}[w_{i−1}(t′)], q_i[w_i(t + 1)], q_{i+1}[w_{i+1}(t″)] are not homotopic to each other, as the obstacle p_l has been crossed.

4 Extension of STON: Shortest Homotopic Path

The problem of computing the shortest homotopic path can be viewed as an instance of pulling a string to tighten it, where the given path corresponds to the initial configuration of the string and the shortest homotopic path corresponds to the tightened configuration of the same string.
The STON assumes a string to be sampled such that there exists at most one obstacle in the triangle formed by any three consecutive processors. In this section, given a path π_i, we propose an extension of STON that does away with that assumption, and apply the extension to computing the shortest homotopic path π_s with respect to a given set of obstacles P, where π_i might be simple or non-simple. The set of obstacles P is specified as a set of points in ℜ² with no assumption made about their connectivity. The input path π_i is specified in terms of a pair of terminal points and either a mathematical equation or a sequence of points in ℜ². In the former case, the path is sampled to obtain the sequence of points in ℜ².

When a path is sampled sparsely, it can no longer be guaranteed that, at the time of choosing a feature vector, only one obstacle will exist within the triangular area spanned by any three consecutive processors. Hence, the algorithm might fail to perform correctly, as Theorem 1 no longer holds (see Fig. 4).

Figure 5: C is the centroid of all obstacle points lying within the triangle formed by q_{i−1}, q_i, q_{i+1}. (a) The convex hull of the obstacle points lying in the partition, formed by the line segments joining the processors q_{i−1} and q_{i+1} to C, adjacent to q_i. (b) Introduction of new processors near the vertices of the convex hull; q_i becomes useless.

Let us assume the given path is very sparsely sampled. In that case, whenever more than one obstacle point is encountered within a triangle formed by three consecutive processors q_{i−1}, q_i, q_{i+1}, the centroid, say C, of the obstacle points lying within the triangle is computed. The line segments joining the processors q_{i−1} and q_{i+1} to C partition the obstacle space within the triangle into two disjoint parts.
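The split step just described needs the planar convex hull of the obstacle points in a partition. The paper computes it by running STON itself recursively (a single sweep suffices, as noted in the complexity analysis); purely as an illustrative stand-in, a standard Andrew's monotone-chain routine computes a tight hull. The function name and point representation here are assumptions of this sketch:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise
    order, starting from the lexicographically smallest point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints
```

For instance, the hull of a unit square plus an interior point is just the four corners; the interior point is discarded, which is exactly why new processors need only be introduced near hull vertices.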
The convex hull of the obstacle points lying within the partition adjacent to the processor q_i is computed (see Fig. 5(a)). A new processor is introduced and initialized near each vertex of the convex hull (see Fig. 5(b); Appendix A.2 details how processors are introduced near the hull vertices). The indices of all processors and their corresponding weights are updated. The processor q_i is considered "useless" and is deleted; a similar notion of useless units has been used in [11].

We claim that this extension of STON works correctly in all cases. For a contradiction, assume that there exists an obstacle in the partition not adjacent to the processor q_i (see Fig. 5(a)) at which a processor has to be introduced in order to obtain the shortest homotopic path; that is, the shortest homotopic path passes through an obstacle in the partition not adjacent to q_i. Let p_j be such an obstacle point (see Fig. 6). The line from q_{i−1} through p_j partitions the triangle formed by q_{i−1}, q_i, q_{i+1} into two disjoint parts. If the shortest homotopic path passes through p_j, there cannot exist any obstacle point in the partition, formed by the line from q_{i−1} through p_j, adjacent to q_i. But in that case, the centroid C cannot lie in the partition, formed by the line from q_{i−1} through p_j, adjacent to q_i: a contradiction. If the shortest homotopic path passes through p_j, the centroid C will lie in the partition, formed by the line from q_{i−1} through p_j, not adjacent to q_i. In that case, the obstacle point p_j lies in the partition, formed by the line segments joining processors q_{i−1} and q_{i+1} to C, adjacent to q_i. Hence the claim follows.

Figure 6: The shortest homotopic path cannot pass through any obstacle point lying in the partition, formed by the line segments joining processors q_{i−1} and q_{i+1} to C, not adjacent to q_i.

The extended algorithm is summarized below.

Input:
Set of obstacles, initial string configuration (i.e. initial sequence of processors)

Output: Final string configuration (i.e. final sequence of processors)

1. initialize the weights
2. t ← 0
3. while convergence criterion (eq. 5) not satisfied, do
4.   t ← t + 1
5.   for each processor on the path, do
6.     z ← number of obstacles inside the triangle formed with the neighboring processors
7.     if z = 0,
8.       create feature vector (eq. 2)
9.       update weight (eq. 4)
10.    if z = 1,
11.      select feature vector (eq. 3)
12.      update weight (eq. 4)
13.    if z > 1,
14.      compute the centroid of the obstacles inside the triangle and the convex hull of those lying in the adjacent partition
15.      introduce new processors near the hull vertices and delete the useless processor

Step 6 requires O(log n + m) time, where n is the number of obstacle points and m is the number of obstacle points inside the triangle formed by a processor and its adjacent neighbors, 0 ≤ m ≤ n. This complexity can be achieved by a one-time construction of a 2D range tree of the obstacle points in O(n log n) time; querying the tree requires O(log n + m) time using fractional cascading [31]. On average, m = n/k, where k is the number of processors. Thus, the complexity of STON is O(log n + n/k) per processor per sweep, assuming the input path has been sampled at half the minimum distance between the obstacles.

The purpose of extending STON is to eliminate the constraint on sampling. As a result, steps 13–15 had to be introduced, which use the algorithm recursively for computing the convex hull. Let T(n) be the complexity of the extension of STON for each sweep and k be the number of processors at the end of a sweep. Then, from the above algorithm, we get

T(n) = k (r_c T(m) + log n + m)    (6)

where r_c is the number of sweeps required to compute the convex hull. The convex hull is computed to determine the number and locations of the new processors that have to be introduced so that no more than one obstacle exists in any triangle formed by three consecutive processors. For this purpose, it is sufficient to compute just one sweep of the convex hull instead of a tight convex hull. This strategy saves computational cost, as the newly added processors will eventually not remain on the convex hull of the obstacles but on the path. Therefore,

T(n) = k (T(n/k) + log n + n/k) = O(n (log k + log_k n))    (7)

Thus, the complexity of the extension of STON is O((n/k)(log k + log_k n)) per processor per sweep. As the number of processors (k) increases, m →
1, and the complexity of the extension of STON becomes O(log n). Thus, the extension of STON starts with a complexity of O((n/k)(log k + log_k n)) and reaches a complexity of O(log n) when no more processors need to be introduced. It is interesting to note that the complexity of STON is comparable with that of some recently proposed variants of SOM [10, 32].

In computational geometry, many researchers have proposed algorithms to solve this problem with the primary goal of minimizing computational complexity (see [12] for a detailed review). Efrat et al. [33] and Bespamyatnikh [34] independently proposed output-sensitive algorithms for the problem. Their algorithms tackle simple and non-simple paths in different ways, with the one for non-simple paths having higher complexity. Bespamyatnikh's algorithm for non-simple paths achieves O(log n) time per output vertex. If the terminal points of a given path are not fixed, the resulting problem is NP-hard [35]; that case is not dealt with in this paper.

5 Experimental Results

Figure 7: STON works incorrectly for large values of α. Circles represent point obstacles, while dark and light lines represent the initial and tightened configurations of a path, respectively.

In this section, we present experimental results obtained by deploying STON on different data sets for different purposes. The extension of STON has been used to compute shortest homotopic paths, smooth paths, and convex hulls. Fig. 7 illustrates the performance of STON when α is assigned a large value close to unity while feature vectors are chosen according to eq. 3. In that case, the algorithm might fail to perform optimally, as the final path might cling to undesired obstacles, as shown by the arrow in Fig. 7.
This happens because once a processor falls on an obstacle, which might happen for some processors before convergence if α is large, the processor fails to let the obstacle go: the obstacle continues to remain within its triangle, and the processor has no memory of the direction from which it arrived. Such behavior can only be averted by choosing α much smaller than unity when feature vectors are selected by eq. 3, which provides ample time for the processors to distribute themselves along the path before coming close to any obstacle. For our experiments, α was chosen as follows:

α(t) = { 1,             if the feature vector is created (eq. 2)
       { β(1 + t/T),    if the feature vector is selected (eq. 3)    (8)

where β is the learning constant, 0 < β < 0.5, and T is the total number of sweeps within which STON is expected to converge. Typically, β and T are assigned the values 0.01 and 5000, respectively. Thus, a processor initially proceeds slowly towards the chosen obstacle, but the rate of approach increases as more and more sweeps are computed. This prevents STON from converging at suboptimal solutions. Throughout our experiments, ε is chosen to be 0.001% of the maximum distance covered along any one dimension by the obstacles.

Figure 8: STON applied to a non-simple path that turns out to be simple when tightened, shown (a) at initialization and after (b) 1, (c) 5, (d) 10, (e) 15, and (f) 20 sweeps. Circles denote point obstacles while dots denote the locations of processors on the path.

Fig. 8(a) illustrates a complicated configuration of a path that has not been sampled uniformly. STON was applied to shorten this path, and the configurations
reached after 1, 5, 10, 15, and 20 sweeps are shown in Fig. 8. It can be seen that the processors gradually distribute themselves evenly along the path as their weights are updated with respect to created feature vectors (eq. 2). This, together with the assignment of α according to eq. 8, helps STON avoid the suboptimal convergence that could otherwise have occurred after 10 sweeps (see Fig. 8(d)). The correct shortest configuration was reached within 20 sweeps.

Figure 9: Performance of STON in a structured obstacle environment, (a) at initialization and (b) after 15 sweeps. Dotted lines represent the contours of the obstacles while firm lines represent paths.

Fig. 9 illustrates the performance of STON in a structured environment where the obstacles are not point objects but two-dimensional shapes. The algorithm converged within 15 sweeps. In a structured environment, STON treats the points forming the two-dimensional shapes as point obstacles and does not use their connectivity information. The algorithm performs equally well in structured and unstructured environments.

Figure 10: STON finds a smooth homotopic path with respect to a huge set (thousands) of obstacles, (a) at initialization and after (b) 1, (c) 10, and (d) 20 sweeps. Dots represent point obstacles while firm lines represent paths.

In real-world applications, such as navigational planning of mobile robots [26] or route formation for military planning [14, 15, 19, 21, 24], the absolute shortest path is not always desired; a suboptimal path devoid of sharp turns is often preferable. Fig. 10 shows the capability of STON to produce such paths by appropriately adjusting the parameter ε; in this case, ε was chosen to be 0.1%. The illustration in Fig. 10 further demonstrates the capability of STON to handle a huge number of obstacles, in the range of a few thousand.

STON can be used to compute the convex hull of a set of points, as shown in Fig. 11.
Figure 11: STON computes the convex hull of a set of points represented by circles, (a) at initialization and (b) after 10 sweeps. Lines represent the contour of the convex hull while dots on the contour represent the locations of the processors.

A path has a starting and an ending point, which are fixed and common to all paths belonging to the same homotopy class. To exploit this information for computing shortest homotopic paths, the first and last processors, q_1 and q_k, on a path with k processors were assumed fixed, and their corresponding weights were never updated. Computation of the convex hull, however, does not require any fixed processors, so the weights of all processors were updated, and the starting and ending points were assumed to be the same. Such a modification of STON makes it functionally similar to an elastic band
1% error 10% error 0.1% error
Figure 12: Experiments show that average number ofsweeps per processor required for convergence usingSTON decreases with the increase in number of proces-sors.or a Snake [36].Experiments with a number of different data sets, afew of which are shown in Fig. 7 through 11, assumingthe learning constant β to be 10 − , reveal certain char-acteristics of STON. It is expected that the total num-ber of sweeps required for convergence increases with theincrease in number of processors. Outcomes of our ex-periments satisfy such expectations but they also showthat average number of sweeps per processor requiredfor convergence decreases with the increase in numberof processors (see Fig. 12). This is important in deter-mining how many processors to sample a path with asone should choose the optimum number of processors forminimizing computational costs. The errors in Fig. 12refer to the ratio of the length of the shortened path atconvergence with respect to the length of the shortestpath.The learning rate is an important factor for ensur-ing faster convergence. The number of processors beingfixed, total number of sweeps required for convergence8 b s w eep s ( l og sc a l e ) Figure 13: As the learning constant decreases, the num-ber of sweeps required for convergence using STON in-creases.increases with decrease in learning constant β (see Fig.13). We experimented with a number of data sets includ-ing those shown in Fig. 7 through 11, sampling the pathswith 30, 59, 88, 117, 146, 175 processors at different lo-cations and varying the learning constant β from 10 − to 0.2 for each data set. Choosing a very high learningconstant might lead to suboptimal results as has beenillustrated in Fig. 7. It is interesting to note from Fig.13 that for low learning constants, such as 10 − or lesser,the total number of sweeps required for convergence ismore for lesser number of processors. 
This observation only reinforces the fact that the average number of sweeps per processor decreases with the increase in the number of processors for low learning constants.

Conclusion

A self-organizing neural network algorithm, STON, is proposed that models the phenomenon undergone by the particles forming a string when the string is tightened from one or both of its ends amidst obstacles. Discussions of the properties and correctness of this anytime algorithm are presented assuming the given string is sampled at intervals of at most d/2, where d is the minimum distance between the obstacles. It is shown how STON might be extended for tightening strings when the above constraint on sampling is not met. This extension is applied to compute the shortest homotopic path with respect to a set of obstacles. Proof of correctness and computational complexity of the extension of STON are included. Experimental results show that the proposed algorithm works correctly with both simple and non-simple paths in reasonable time as long as the constraints for correctness are met. STON is used to generate smooth and shorter homotopic paths, a problem that can be modeled as the phenomenon of tightening a string. STON is also used as an elastic band for computing convex hulls. Future research aims at improving the computational complexity of the extension of STON and using it to solve more problems that can be mapped into the problem of tightening a string or an elastic band.

Figure 14: Sampling a path uniformly at half the minimum distance between the obstacles guarantees at most one obstacle in any triangle.
A Appendices
A.1 A Finite Sampling Theorem
A string in ℜ², wound around point obstacles, can be finitely sampled in such a way as to guarantee at most one obstacle within the triangular area spanned by any three consecutive points on the string. The following theorem states the constraint necessary to be imposed on the sampling.

Theorem 3.
Sampling a string at half the minimum distance between the obstacles guarantees at most one obstacle in any triangle formed by three consecutive points on the string.

Proof.
Let d be the minimum distance between any two unique obstacles in P, P being the set of point obstacles. Let us sample the string such that the distance between any two consecutive points on the string is at most d/2. Since d is finite, this clearly leads to a finite sampling of the string. The theorem claims that this sampling ensures that a triangle formed by any three consecutive points on the string will never contain more than one unique obstacle.

For contradiction, let us assume that there exist two obstacle points in a triangle formed by three consecutive points q_{i-1}, q_i, q_{i+1} (see Fig. 14). The segments q_{i-1} q_i and q_i q_{i+1} included in the string, which form two sides of the triangle, are each of length at most d/2. Thus the maximum distance between any two points lying within the triangle is less than d. But the distance between any two obstacle points is at least d. Hence, a contradiction, and the claim follows.

Figure 15: Introducing processors anywhere near the vertices (shown by circles) of the convex hull does not guarantee that the new path will be homotopic to the old one.
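The sampling bound of Theorem 3 is easy to check numerically. The following sketch (Python; all names are mine, for illustration only) computes the minimum pairwise obstacle distance d, resamples a polyline so that consecutive samples are at most d/2 apart, and counts the obstacles falling inside each triangle of three consecutive samples.

```python
import itertools
import math

def min_pairwise_distance(obstacles):
    """Smallest Euclidean distance d between any two distinct obstacles."""
    return min(math.dist(p, q) for p, q in itertools.combinations(obstacles, 2))

def resample(path, spacing):
    """Resample a polyline so consecutive samples are at most `spacing` apart."""
    samples = [path[0]]
    for a, b in zip(path, path[1:]):
        n = max(1, math.ceil(math.dist(a, b) / spacing))
        samples.extend((a[0] + k / n * (b[0] - a[0]),
                        a[1] + k / n * (b[1] - a[1])) for k in range(1, n + 1))
    return samples

def contains(tri, p):
    """Inclusive point-in-triangle test via signs of cross products."""
    a, b, c = tri
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def max_obstacles_in_any_triangle(samples, obstacles):
    """Largest obstacle count over triangles of three consecutive samples."""
    return max(sum(contains((a, b, c), p) for p in obstacles)
               for a, b, c in zip(samples, samples[1:], samples[2:]))
```

In agreement with the theorem, for any obstacle set and any path, sampling at spacing d/2 should never yield a triangle holding two or more obstacles.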
A.2 Procedure for Introducing Processors in Convex Hull
Here we describe the procedure for introducing processors near each vertex of the convex hull in the extension of STON. A newly introduced processor, say q_i, should be placed at a location such that connecting it with the neighboring processors, q_{i-1} and q_{i+1}, does not alter the homotopy class of the path, i.e., the path in which the new processors are being introduced should remain in the same homotopy class as the given path. This is not a trivial task, as illustrated in Fig. 15, where all the processors are introduced near the convex hull but the new path π_new is not homotopic to the old path π_old.

In order for the new path to remain in the same homotopy class as the old one, processors cannot be introduced within the convex hull, and lines joining consecutive processors cannot intersect the edges of the convex hull.

Theorem 4.
If processors are introduced outside the convex hull in the regions bounded by the extended adjacent edges of the convex hull, then the lines joining the consecutive processors will not intersect the edges of the convex hull.

Proof.
Let V_1 V_2 ... V_m be an m-sided polygon which is the convex hull of the set of obstacles under consideration (see Fig. 16). Let q_i be a processor in the region formed by the extensions of the adjacent edges V_{i-1} V_i and V_{i+1} V_i, for all i, 1 ≤ i ≤ m. The claim states that there cannot be an intersection between the line segment q_j q_{j+1} and any edge of the convex hull.

Let us assume, for a contradiction, that there exists at least one intersection between the line segment q_j q_{j+1} and an edge, say V_k V_{k+1}, of the convex polygon V_1 V_2 ... V_m. By convexity, the edge V_k V_{k+1} lies in the closed half-plane, bounded by the extended line segment V_j V_{j+1}, that contains the hull; so for the segment q_j q_{j+1} to reach it, q_j and q_{j+1} must lie on opposite sides of the extended line segment V_j V_{j+1}. But by construction, the processors q_j and q_{j+1} lie on the same side of the extended line segment V_j V_{j+1}. Hence a contradiction, and the claim follows.

Figure 16: Introduction of processors outside the convex hull in the regions bounded by the extended adjacent edges of the convex hull guarantees that the lines joining the consecutive processors will not intersect the edges of the convex hull.
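Theorem 4 can also be checked numerically. In the sketch below (Python; the particular placement rule and all names are mine), each processor is dropped a short step beyond its vertex along the direction (V_i − V_{i−1}) + (V_i − V_{i+1}), a positive combination of the two edge-extension directions, which therefore lies inside the region bounded by the extended adjacent edges; a standard proper-intersection test then confirms that segments between consecutive processors never cross a hull edge.

```python
import math

def cross(o, u, v):
    """z-component of (u - o) x (v - o)."""
    return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

def properly_intersect(p1, p2, p3, p4):
    """True iff segments p1p2 and p3p4 cross at an interior point of both."""
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def cone_processors(hull, step=0.3):
    """Place one processor outside each vertex of a convex hull, inside the
    region bounded by the extensions of its two adjacent edges (Theorem 4).
    The direction (V_i - V_{i-1}) + (V_i - V_{i+1}) is a positive combination
    of the two edge-extension directions, so it lies inside that region."""
    m, qs = len(hull), []
    for i in range(m):
        prv, cur, nxt = hull[i - 1], hull[i], hull[(i + 1) % m]
        dx = (cur[0] - prv[0]) + (cur[0] - nxt[0])
        dy = (cur[1] - prv[1]) + (cur[1] - nxt[1])
        norm = math.hypot(dx, dy)
        qs.append((cur[0] + step * dx / norm, cur[1] + step * dy / norm))
    return qs
```

For any convex polygon, no segment q_i q_{i+1} produced this way properly intersects a hull edge, matching the claim of the theorem.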
References

[1] B. Banerjee. String tightening as a self-organizing phenomenon. IEEE Trans. Neural Networks, 18(5):1463–1471, 2007.
[2] T. Kohonen. Self-Organizing Maps. Springer, Berlin, 2001.
[3] J. A. Kangas, T. Kohonen, and J. Laaksonen. Variants of self-organizing maps. IEEE Trans. Neural Networks, 1:93–99, 1990.
[4] B. Fritzke. Growing cell structures - a self-organizing network for unsupervised and supervised learning. Neural Networks, 7(9):1441–1460, 1994.
[5] D. Choi and S. Park. Self-creating and organizing neural networks. IEEE Trans. Neural Networks, 5(4):561–575, 1994.
[6] L. Schweizer, G. Parladori, L. Sicuranza, and S. Marsi. A fully neural approach to image compression. In T. Kohonen, K. Makisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 815–820, North-Holland, Amsterdam, 1991.
[7] K. Obermayer, H. Ritter, and K. Schulten. Large-scale simulations of self-organizing neural networks on parallel computers: application to biological modeling. Parallel Computing, 14:381–404, 1990.
[8] F. Favata and R. Walker. A study of the application of Kohonen-type neural networks to the traveling salesman problem. Biol. Cybernetics, 64:463–468, 1991.
[9] H. J. Ritter and T. Kohonen. Self-organizing semantic maps. Biol. Cybernetics, 61:241–254, 1989.
[10] H. Kusumoto and Y. Takefuji. O(log M) self-organizing map algorithm without learning of neighborhood vectors. IEEE Trans. Neural Networks, 17(6):1656–1661, 2006.
[11] B. Fritzke. A self-organizing network that can follow non-stationary distributions. In Proc. Intl. Conf. Artificial Neural Networks, pages 613–618. Springer, 1997.
[12] J. S. B. Mitchell. Geometric shortest paths and network optimization. In J. R. Sack and J. Urrutia, editors, Handbook on Computational Geometry, pages 633–702. Elsevier Science, 2000.
[13] H. Choset, K. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. Kavraki, and S. Thrun. Principles of Robot Motion: Theory, Algorithms, and Implementation. MIT Press, Cambridge, 2005.
[14] B. Chandrasekaran, J. R. Josephson, B. Banerjee, U. Kurup, and R. Winkler. Diagrammatic reasoning in support of situation understanding and planning. In Proc. 23rd Army Sci. Conf., Orlando, FL, 2002.
[15] B. Banerjee, B. Chandrasekaran, J. R. Josephson, and R. Winkler. Constructing diagrams to support situation understanding and planning: Part 1: Diagramming group motions. Technical report, Ohio State University, Columbus, 2003.
[16] B. Chandrasekaran, U. Kurup, B. Banerjee, J. R. Josephson, and R. Winkler. An architecture for problem solving with diagrams. In A. Blackwell, K. Marriott, and A. Shimojima, editors, Lecture Notes in AI, volume 2980, pages 151–165. Springer-Verlag, 2004.
[17] B. Chandrasekaran, U. Kurup, and B. Banerjee. A diagrammatic reasoning architecture: Design, implementation and experiments. In Proc. AAAI Spring Symp., Reasoning with Mental and External Diagrams: Computational Modeling and Spatial Assistance, pages 108–113, Stanford University, CA, 2005.
[18] B. Banerjee. A layered abductive inference framework for diagramming group motions. Special Issue of Logic J. IGPL: Abduction, Practical Reasoning, and Creative Inferences in Sci., 14(2):363–378, 2006.
[19] B. Banerjee and B. Chandrasekaran. A framework for planning multiple paths in free space. In Proc. 25th Army Sci. Conf., Orlando, FL, 2006.
[20] B. Banerjee. Spatial problem solving for diagrammatic reasoning. PhD thesis, Dept. of Computer Science & Engineering, The Ohio State University, Columbus, 2007.
[21] B. Chandrasekaran, B. Banerjee, U. Kurup, J. R. Josephson, and R. Winkler. Diagrammatic reasoning in army situation understanding and planning: Architecture for decision support and cognitive modeling. In P. McDermott and L. Allender, editors, Advanced Decision Architectures for the Warfighter: Foundations and Technology, chapter 21, pages 379–394. 2009.
[22] B. Banerjee and B. Chandrasekaran. A spatial search framework for executing perceptions and actions in diagrammatic reasoning. In Lecture Notes in AI, volume 6170, pages 144–159. Springer, Heidelberg, 2010.
[23] B. Banerjee and B. Chandrasekaran. A constraint satisfaction framework for executing perceptions and actions in diagrammatic reasoning. Journal of Artificial Intelligence Research, 39:373–427, 2010.
[24] B. Banerjee and B. Chandrasekaran. A framework of Voronoi diagram for planning multiple paths in free space. Journal of Experimental & Theoretical Artificial Intelligence, 25(4):457–475, 2012.
[25] S. Gao, M. Jerrum, M. Kaufmann, K. Mehlhorn, and W. Rülling. On continuous homotopic one layer routing. In Proc. Symp. Comput. Geometry, pages 392–402, 1988.
[26] S. Quinlan and O. Khatib. Elastic bands: connecting path planning and robot control. In Proc. IEEE Intl. Conf. Robotics and Automation, volume 2, pages 802–807, Atlanta, GA, 1993.
[27] D. Grigoriev and A. Slissenko. Polytime algorithm for the shortest path in a homotopy class amidst semi-algebraic obstacles in the plane. In Proc. Intl. Symp. Symbolic and Algebraic Computation, pages 17–24, Rostock, Germany, 1998.
[28] F. Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.
[29] E. Berglund and J. Sitte. The parameterless self-organizing map algorithm. IEEE Trans. Neural Networks, 17(2):305–316, 2006.
[30] T. De Wolf and T. Holvoet. Emergence versus self-organisation: different concepts but promising when combined. Engineering Self Organising Systems: Methodologies and Applications, Lecture Notes in Computer Sci., 3464:1–15, 2005.
[31] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry. Springer-Verlag, 1997.
[32] S. Pal, A. Datta, and N. R. Pal. A multilayer self-organizing model for convex-hull computation. IEEE Trans. Neural Networks, 12(6):1341–1347, 2001.
[33] A. Efrat, S. G. Kobourov, and A. Lubiw. Computing homotopic shortest paths efficiently. In Proc. 10th Annual European Symp. Algorithms, pages 411–423, 2002.
[34] S. Bespamyatnikh. Computing homotopic shortest paths in the plane. J. Algorithms, 49(2):284–303, 2003.
[35] D. Richards. Complexity of single-layer routing. IEEE Trans. Computers, 33(3):286–288, 1984.
[36] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: active contour models. Intl. J. Computer Vision, 1(4):321–331, 1988.