Mathematical Properties of Generalized Shape Expansion-Based Motion Planning Algorithms
Adhvaith Ramkumar, Vrushabh Zinage and Satadal Ghosh

Adhvaith Ramkumar, Vrushabh Zinage and Satadal Ghosh are with the Department of Aerospace Engineering, Indian Institute of Technology Madras, India ({ae16b018, ae16b017, satadal}@smail.iitm.ac.in).

Abstract—Motion planning is an essential aspect of autonomous systems and robotics and is an active area of research. A recently-proposed sampling-based motion planning algorithm, termed 'Generalized Shape Expansion' (GSE), has been shown to offer significant improvement in computational time over several existing well-established algorithms. The GSE has also been shown to be probabilistically complete. However, asymptotic optimality of the GSE is yet to be studied. To this end, in this paper we show that the GSE algorithm is not asymptotically optimal by studying its behaviour on the promenade problem. In order to obtain a probabilistically complete and asymptotically optimal generalized shape-based algorithm, a modified version of the GSE, namely the 'GSE⋆' algorithm, is subsequently presented. The aforementioned desired mathematical properties of the GSE⋆ algorithm are justified by its detailed analysis. Numerical simulations are found to be in line with the theoretical results on the GSE⋆ algorithm.

Index Terms—motion planning, sampling-based algorithms.
I. INTRODUCTION
Unmanned vehicle systems (UxS) have seen a steady increase in attention over the past decade due to rapid advancements in technology and the reduced human risk associated with them. Applications of UxS range from civilian ones like pollutant monitoring to defense-related ones like surveillance and reconnaissance. Motion planning, being a key aspect of any UxS, is an active area of research. An early seminal paper in the field of motion planning utilized artificial potential fields [1]; however, it was shown to suffer from the problem of local minima [2]. Numerous online motion planning strategies have been formulated using several methods like velocity obstacles [3] and its extensions such as optimal reciprocal collision avoidance [4], the collision cone approach [5], [6], gradient vector fields [7], pseudospectral methods [8], and terminal angle-constrained guidance theory [9], [10]. In the presence of prior knowledge about the environment, offline motion planners such as discrete search-based methods [11], [12] and sampling-based methods [13], [14] are employed. Optimization-based techniques such as sequential convex programming [15], [16] and quadratic programming methods [17] are also used.

Among offline planners, sampling-based algorithms are a highly favoured approach due to their computational advantages in higher-dimensional spaces. A sampling-based algorithm generates a connected graph, containing the initial and goal points, entirely inside the free space. This is accomplished by obtaining random samples from the free space and successively adding them to the graph, eventually connecting the initial and goal points. The optimal path is then found among the paths between the initial and goal points. Several sampling-based algorithms have been developed in the literature, such as Probabilistic Road Maps (PRM) [13], Rapidly-exploring Random Trees (RRT) [18] and Fast Marching Tree (FMT⋆) [19]. A spherical expansion-based motion planning algorithm, termed 'SE-SCP' [20], was shown to be computationally more efficient than PRM⋆ and RRT⋆ via numerical studies. In order to better leverage obstacle-space information, an algorithm termed the 'Generalized Shape Expansion' (GSE) algorithm, relying on a more generic shape expansion, has recently been presented in [21] for motion planning in 2-D obstacle-cluttered environments. Its extension to 3-D environments was also presented in [22], [23]. The GSE algorithm leveraged a novel generalized shape for the expansion of the generated graph, which helped in exploring the free space in a fast yet more extensive manner. Through extensive numerical simulation studies, the performance of the GSE in terms of computation time was found to be significantly better than that of several other well-established sampling-based methods. An even faster planner has been presented in [24] by embedding a directional sampling scheme into the GSE for 2-D environments. While sampling-based motion planning algorithms cannot provide deterministic guarantees, two probabilistic criteria are used to evaluate the utility of these algorithms.
These properties are probabilistic completeness and asymptotic optimality. Probabilistic completeness of an algorithm guarantees that the probability of failure of the algorithm decays to zero as the number of samples goes to infinity, while asymptotic optimality of an algorithm guarantees that the algorithm is highly likely to find the optimal path as the number of samples goes to infinity. Algorithms like PRM and RRT were found to be probabilistically complete [25], [18], but they are not asymptotically optimal [14]. In order to ensure asymptotic optimality, their modified versions, namely RRT⋆ and PRM⋆, were developed and analyzed in [14].

This paper evolves from [26], where the probabilistic completeness property of the GSE algorithm was established. However, asymptotic optimality of the GSE algorithm has not been studied so far. To this end, in this paper we consider the promenade problem, for which, following the methodology of [27], we find that the GSE has a non-zero probability of returning low-quality solutions. This establishes the lack of asymptotic optimality of the GSE algorithm. Subsequently, we present a modification of the GSE algorithm, termed the GSE⋆ algorithm, in order to ensure asymptotic optimality as well as probabilistic completeness. Besides the GSE expansion, the GSE⋆ algorithm adds additional vertices and edges to the graph generated by the GSE based on a decreasing connection-radius criterion, thus limiting the path cost while further exploring the environment. The basic GSE graph, being a subset of the GSE⋆ graph, also helps in retaining the computational advantage. A detailed analysis of the GSE⋆ algorithm is carried out for theoretical verification of both probabilistic completeness and asymptotic optimality.

The paper is organized as follows. The problem description, along with a brief description of the probabilistic completeness and asymptotic optimality of a path planning problem, is given in Section II. We provide the d-dimensional GSE algorithm in Section III-A. This is followed by the study of mathematical properties of the GSE algorithm in Sections III-B and III-C. Next, we present the GSE⋆ algorithm in Section IV-A, followed by an analysis of the same in Sections IV-B and IV-C. Simulation studies comparing the GSE and GSE⋆ are presented in Section V, followed by the conclusion in Section VI.

II. PROBLEM DESCRIPTION
In this section, we introduce the general path planning problem along with the probabilistic completeness and asymptotic optimality of the path planning problem. Finally, the problem statement which we address in this paper is introduced.
A. Path Planning Problem
Consider a compact configuration space X = [0, 1]^d, where d is the dimension of the workspace. Let the obstacles in the configuration space be denoted by X_obs. The free space in X is then given by X_free = X \ X_obs. We assume that the free space is open and the obstacle space is closed. The start point X_init is an element of X_free, and the goal point X_goal ∈ X_free. Given a path σ : [0, 1] → X, the total variation of σ, which serves as the cost c(σ), is defined as

TV(σ) = sup_{n ∈ ℕ, 0 = τ_0 < τ_1 < ··· < τ_n = 1} Σ_{i=1}^{n} ‖σ(τ_i) − σ(τ_{i−1})‖    (1)

so that a path σ_1 is said to be of higher cost than σ_2, i.e. c(σ_1) > c(σ_2), if TV(σ_1) > TV(σ_2). Further, note that a path σ : [0, 1] → X is said to have
• strong δ-clearance if for each s ∈ [0, 1], B_δ(σ(s)) ⊆ X_free, where B_r(P) = {x : ‖x − P‖ < r};
• weak δ-clearance if there exists a homotopy, i.e. a continuous function ψ : [0, 1] → S, with ψ(0) = σ and ψ(1) = σ', where σ' has strong δ-clearance; further, if α ∈ (0, 1], then the path ψ(α) has strong δ_α-clearance for some δ_α > 0.
We also assume the following:
• There exists a feasible path σ_0 between the start and goal points with strong ε-clearance for some ε > 0.
• There exists a path σ⋆ with weak δ-clearance for some δ > 0 and optimal cost c⋆, such that if there exists a sequence (σ_n)_{n=1}^∞ with lim_{n→∞} σ_n = σ⋆, then lim_{n→∞} c(σ_n) = c⋆.
A path planning problem entails finding a feasible path in an environment characterized by the triplet (X_free, X_init, X_goal).
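For a path represented in discretized form, the cost in Eq. (1) reduces to the sum of Euclidean segment lengths. The following minimal sketch computes this quantity; the waypoint representation and the function name are illustrative assumptions, not part of the paper.

```python
import numpy as np

def path_cost(waypoints):
    """Total variation of Eq. (1) for a path given as an ordered list of
    waypoints: the sum of Euclidean lengths of consecutive segments."""
    pts = np.asarray(waypoints, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# Example: an L-shaped path in the unit square has cost 0.5 + 0.5 = 1.0.
print(path_cost([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5]]))
```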
B. Probabilistic Completeness

Sampling-based motion planning algorithms rely on uniform random sampling from the free space in each step. Consequently, it is difficult to provide deterministic guarantees on the success of the algorithm. We can, however, study the claim that success in finding a feasible path from X_init to X_goal is guaranteed as the number of iterations of the algorithm increases to infinity. This constitutes the fundamental idea of probabilistic completeness of an algorithm. We define a few preliminary quantities:
• We denote the graph generated by an algorithm after n steps by G_n = (V_n, E_n), where V_n and E_n are the vertex and edge sets, respectively.
• Let P_n be an indicator random variable that takes the value one to indicate the event that there is a connected path between X_init and X_goal in G_n.
The algorithm is said to possess the probabilistic completeness property if

lim sup_{n→∞} P(P_n = 0) = 0    (2)

where P denotes the probability of an event.

C. Asymptotic Optimality
Let Y_n denote the cost of the least-cost solution returned by a sampling-based motion planning algorithm in n iterations. We define c⋆ = inf{c(σ) : σ is a feasible path}. An algorithm is said to be asymptotically optimal if

P({lim sup_{n→∞} Y_n = c⋆}) = 1    (3)

D. Problem Statement
In this paper, first the GSE algorithm, which was proposed for 2-D and 3-D environments in [21], [22], respectively, is extended to a general d-dimensional configuration space. Then, probabilistic completeness of the GSE algorithm is studied first, followed by an analysis of the cost of paths returned by the GSE algorithm. If the GSE algorithm is not found to be asymptotically optimal, then an amended version of the GSE that satisfies both probabilistic completeness and asymptotic optimality is to be presented and analyzed.

Fig. 1: Depiction of θ_{i,X} and r_{i,X} for all i ∈ {1, 2, 3, 4} for a 2-D environment.

III. GENERALIZED SHAPE EXPANSION (GSE) ALGORITHM
Let there be m obstacles in the configuration space X. We assume that the points on the obstacle boundaries in X are known. Let the set of points on the i-th obstacle be denoted by Q_i, where i ∈ {1, …, m}. Consider a point X ∈ X ⊂ R^d. The minimum distance vector of X from the i-th obstacle is denoted by r_{i,X}, and its magnitude is given by ‖r_{i,X}‖. Without loss of generality, the obstacles are numbered such that ‖r_{i,X}‖ ≤ ‖r_{j,X}‖ for i < j. Let θ_{i,X} denote the maximum angle made by the vector r_{i,X} with (X − x), where x ∈ Q_i. In other words,

θ_{i,X} = max_{x ∈ Q_i} cos⁻¹( ((X − x) · r_{i,X}) / (‖X − x‖ ‖r_{i,X}‖) ),   i ∈ {1, …, m}    (4)

where a · b denotes the dot product of vectors a and b. Note that the formulation of the maximum angular spectrum covering the i-th obstacle as seen from a point X given above in Eq. (4) is a generalization of the formulation presented in [22]. A sample depiction of θ_{i,X} and r_{i,X} for four obstacles in a two-dimensional configuration space is shown in Fig. 1.
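The pair (r_{i,X}, θ_{i,X}) of Eq. (4) can be computed directly from sampled obstacle boundary points. The sketch below is a minimal illustration under two assumptions not stated in the paper: each obstacle is available as a finite array of boundary points, and the sign convention for r_{i,X} is chosen so that it points from X toward its nearest boundary point (the angle in Eq. (4) is unchanged if both vectors are reversed).

```python
import numpy as np

def obstacle_parameters(X, Q):
    """Return (r_vec, theta) for one obstacle, as in Eq. (4).

    X : (d,) free-space point; Q : (N, d) array of boundary points.
    r_vec points from X to the nearest boundary point; theta is the maximum
    angle (radians) between r_vec and the directions from X to all of Q.
    """
    diffs = Q - X                                   # directions from X to the boundary
    dists = np.linalg.norm(diffs, axis=1)
    r_vec = diffs[np.argmin(dists)]                 # minimum-distance vector
    cosines = diffs @ r_vec / (dists * np.linalg.norm(r_vec))
    theta = float(np.max(np.arccos(np.clip(cosines, -1.0, 1.0))))
    return r_vec, theta
```

Sorting the obstacles by ‖r_vec‖ then reproduces the numbering convention used in the text.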
A. Description of the GSE Algorithm

1) Initialization: During initialization, the start and goal positions X_init and X_goal, respectively, are added to the vertex set V with their corresponding parameters {r_{i,X}, θ_{i,X}}_{i=1}^m. The values of these parameters are then used to compute the corresponding shape functions S_init(P) and S_goal(P), respectively, using the function Shape(P, X, X_obs) for X = X_init and X = X_goal, respectively (Lines 2, 3 of Algorithm 1). The edge set E is set empty. This is shown in Lines 1-3 of Algorithm 1.
2) Sample point:
A random point X_rand ∈ X_free is drawn from a uniform distribution such that the sequence of sampled points is independent and identically distributed, as shown in Line 5 of Algorithm 1.
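A common way to realize GenerateSample (Line 5) is rejection sampling: draw uniformly from the configuration space and discard samples that fall in X_obs. The sketch below is a minimal illustration, assuming the [0, 1]^d workspace of Section II and a caller-supplied obstacle test; the function name and signature are illustrative.

```python
import numpy as np

def generate_sample(dim, in_obstacle, rng=np.random.default_rng()):
    """Draw a uniform i.i.d. sample from X_free inside [0, 1]^dim by rejection.

    in_obstacle : callable returning True if a point lies in X_obs.
    """
    while True:
        x = rng.uniform(0.0, 1.0, size=dim)
        if not in_obstacle(x):
            return x
```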
3) Nearest point:
For a given X_rand, the next step is to find the point X_nearest ∈ V that is nearest to X_rand (see Line 7 of Algorithm 1):

Nearest(G = (V, E), X_rand) = argmin_{X ∈ V} ‖X − X_rand‖    (5)
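Eq. (5) is a plain nearest-neighbour query over the current vertex set. A brute-force sketch follows; for large graphs a k-d tree could replace the linear scan, a design choice not discussed in the paper.

```python
import numpy as np

def nearest(V, x_rand):
    """Eq. (5): vertex of V closest to x_rand in the Euclidean norm.

    V : (k, d) array of current vertices; x_rand : (d,) sampled point."""
    V = np.asarray(V)
    return V[np.argmin(np.linalg.norm(V - x_rand, axis=1))]
```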
Algorithm 1: GSE Algorithm
1: V ← {X_init, X_goal}, E ← ∅
2: S_init(P) ← Shape(P, X_init, X_obs)
3: S_goal(P) ← Shape(P, X_goal, X_obs)
4: for j = 1 … n do
5:   X_rand ← GenerateSample
6:   S_rand(P) ← Shape(P, X_rand, X_obs)
7:   X_nearest ← Nearest(G = (V, E), X_rand)
8:   S_nearest(P) ← Shape(P, X_nearest, X_obs)
9:   X_new ← Steer-GSE(X_nearest, X_rand)
10:  S_new(P) ← GSE-Shape(P, X_new, X_obs)
11:  X_intersect ← IntersectShape(G, X_new)
12:  V ← V ∪ {X_new, X_rand}
13:  for X_n ∈ X_intersect do
14:    E ← E ∪ {(X_n, X_new)}
15:  end for
16:  G = (V, E)
17:  Path_init,goal,shortest ← MinPath(G, X_init, X_goal)
18: end for

Algorithm 2: GSE Shape Function (Shape(P, X, X_obs))
1: ℓ_0 = 0
2: for i = 1, 2, …, m do
3:   ℓ_i ← sat( cos⁻¹( r_{i,X} · (P − X) / (‖r_{i,X}‖ ‖P − X‖) ) − θ_{i,X} )
4:   f_i ← sat( θ_{i,X} − cos⁻¹( r_{i,X} · (P − X) / (‖r_{i,X}‖ ‖P − X‖) ) ) + sat( ‖r_{i,X}‖ − ‖P − X‖ )
5:   g_i ← f_i + Σ_{j=1}^{i−1} ℓ_j − (i + 1)
6: end for
7: if all(ℓ_i) = 1 then
8:   Shape(P, X, X_obs) = 0
9: else
10:  S_X(P) = Shape(P, X, X_obs) ← g_1 ∗ g_2 ∗ … ∗ g_m
11: end if
4) Generalized shape generation:
The generalized shape S_X generated about any point X is given by the Shape function, which is detailed in Algorithm 2. The function Shape(P, X, X_obs) in Algorithm 2 returns the value zero if a test point P is within the generalized shape about the sampled point X, and a non-zero value otherwise, that is,

S_X(P) = 0 if P ∈ S_X, and S_X(P) ≠ 0 otherwise.    (6)

Note that the sat(·) function in Algorithm 2 is given by sat(x) = 1 if x ≥ 0, and sat(x) = 0 otherwise.

Fig. 2: An illustration of the generalized shape about X in a 2-D environment.
5) Steering X_rand to generate X_new (Steer-GSE(X_nearest, X_rand)): The randomly selected point X_rand is steered to a suitable point X_new as follows:
• If X_rand ∈ S_{X_nearest}, then X_new is equal to X_rand.
• Otherwise, X_new is the point of intersection of the boundary of S_{X_nearest} and the line joining X_nearest and X_rand.
The point X_new thus computed is then added to the set V.
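A minimal sketch of this steering step is given below. It assumes that the caller supplies a membership test for the shape about X_nearest (e.g., `lambda p: shape_value(p, x_nearest, params) == 0`) and that the shape boundary is crossed only once along the segment, so the crossing can be located by bisection; both assumptions are illustrative.

```python
import numpy as np

def steer_gse(x_nearest, x_rand, in_shape, tol=1e-6):
    """Steer x_rand onto the boundary of S_{x_nearest} along the joining segment.

    in_shape : callable returning True if a point lies in the shape about x_nearest.
    Returns x_rand unchanged if it is already inside the shape.
    """
    if in_shape(x_rand):
        return x_rand
    lo, hi = 0.0, 1.0          # fractions along the segment x_nearest -> x_rand
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        p = x_nearest + mid * (x_rand - x_nearest)
        if in_shape(p):
            lo = mid           # still inside the shape: push outward
        else:
            hi = mid
    return x_nearest + lo * (x_rand - x_nearest)
```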
6) Generation of new edges:
In this step, first a set X_intersect is defined as X_intersect ≜ {X ∈ V : S_X ∩ S_{X_new} ∩ XX_new ≠ ∅}, where XX_new denotes the line segment joining X and X_new. Next, X_new is connected with the vertices X_n ∈ X_intersect via newly generated edges (X_n, X_new), which are then added to the set E.
7) Generating the shortest path from X_init to X_goal: Once a directed graph G = (V, E) connecting X_init and X_goal is found, the minimum-cost connected path from X_init to X_goal is computed using Dijkstra's algorithm.
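The MinPath routine is a standard shortest-path query. The sketch below is a minimal Dijkstra implementation over an index-based edge list with Euclidean edge weights; it treats the edges as undirected for simplicity, which is an assumption of the sketch rather than a statement about the authors' implementation.

```python
import heapq
import numpy as np

def min_path(vertices, edges, i_init, i_goal):
    """Dijkstra shortest path on the GSE graph (minimal sketch).

    vertices : list of (d,) numpy arrays; edges : iterable of index pairs (i, j).
    Returns the list of vertex indices of the minimum-length path, or None.
    """
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        w = float(np.linalg.norm(vertices[i] - vertices[j]))
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist, parent = {i_init: 0.0}, {i_init: None}
    heap = [(0.0, i_init)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == i_goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if i_goal not in parent:
        return None
    path, node = [], i_goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```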
B. Probabilistic Completeness of the GSE Algorithm

We denote the vertex set and edge set of the GSE after n iterations by V^GSE_n and E^GSE_n, respectively. We indicate a summary of the proof of probabilistic completeness of the GSE algorithm; further details are provided in [26]. We show this fact by establishing a few properties of the GSE visibility function f : 2^{X_free} → 2^{X_free} defined as

f(S) ≜ {X ∈ X_free : S_X ∩ S ≠ ∅}    (7)

For the analysis of the GSE, we study the iterates of this function, f^{(2)}(S) ≜ f(f(S)), f^{(3)}(S) = f^{(2)}(f(S)), etc. More precisely, we establish that
• For any path planning problem (conforming to the conditions laid down in our problem statement), there exists a finite constant q > 0 such that f^{(q)}({X_goal}) = X_free.
• Let V̂^GSE_n denote the component of the GSE graph that contains the initial point X_init. Then, the algorithm succeeds in finding a path from X_init to X_goal in n iterations if V̂^GSE_n ∩ f({X_goal}) ≠ ∅.
• The probability that the vertex set of the GSE fails to progress from the set f^{(k)}({X_goal}) to the set f^{(k−1)}({X_goal}) is bounded above by the tail probability of a Bernoulli random variable.
• Using Chernoff bounds, we show that the probability of failure of the GSE algorithm decays exponentially with the number of iterations of the algorithm.
Following this argument, we conclude that the GSE algorithm is probabilistically complete.
C. Analysis of Cost of Path Returned by the GSE Algorithm
In this section, we analyse the cost of the paths returned by the GSE algorithm to investigate whether the GSE has the asymptotic optimality property. In particular, we study the performance of the algorithm on the promenade problem, and conduct the analysis by defining a finite state machine called the Automaton of Sampling Diagrams (ASD), similar to the one defined in [28]. However, in this section, the automaton is modified to better suit the analysis of the GSE algorithm.
1) The promenade problem and the automaton of sampling diagrams for the GSE algorithm:
The promenade problem described below offers some perspective on how the GSE algorithm performs for certain problem types. The automaton we define provides a way to study the critical events in the progress of the GSE algorithm. This helps determine the probability of achieving solutions of different qualities. For ease of understanding, the analysis here is given for 2-D environments. However, it can easily be extended to the more general setting of higher-dimensional environments.
2) Promenade problem:
The setting of this problem is as follows. Define X = [0, α + 2]² and X_obs = [1, α + 1]². As usual, we denote an arbitrary point by coordinates (x, y). The initial and goal points are set to be on the left and right of the central obstacle. While the exact locations need not be precise, we set X_init = (1 − ε, ε) and X_goal = (α + 1 + ε, ε), where ε > 0 is much smaller than min(α, 1). The aim is to generate a path between these two points. In this environment we define the following regions:
• L₁ = {(x, y) ∈ X : x + y ≤ 1}
• L₂ is the reflection of L₁ about the line x = (α + 2)/2
• B₁ = [0, 1] × [α + 1, α + 2]
• B₂ is the reflection of B₁ about the line x = (α + 2)/2
A representative figure is given in Fig. 3. We further classify all solutions to the promenade problem into two types:
Fig. 3: A schematic representation of the promenade problem, showing the central obstacle, the regions B₁, B₂, L₁, L₂, and the points X_init and X_goal.

• Type-B solutions, where the path returned by the algorithm passes through B₁ and B₂, that is, passes through the region above the square obstacle.
• Type-L solutions, where the path returned passes through L₁ and L₂, that is, passes through the region below the square obstacle.
It is immediately evident that Type-B solutions are of higher cost under the given conditions for the stated locations of the initial and goal points. A minimal construction of this environment and the corresponding solution-type classification is sketched below.
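The following sketch builds the promenade regions and classifies a waypoint path as Type-B or Type-L. The value of epsilon is illustrative, and classifying by waypoint membership (rather than by the continuous path) is a simplification assumed only for this example.

```python
import numpy as np

# A minimal sketch of the promenade environment of Section III-C2.
alpha, eps = 2.0, 1e-3
x_init = np.array([1.0 - eps, eps])
x_goal = np.array([alpha + 1.0 + eps, eps])
mirror = (alpha + 2.0) / 2.0                 # vertical line of symmetry

def in_L1(p):   # triangular region below-left of the obstacle
    return p[0] + p[1] <= 1.0

def in_L2(p):   # reflection of L1 about x = (alpha + 2)/2
    return in_L1(np.array([2.0 * mirror - p[0], p[1]]))

def in_B1(p):   # strip above-left of the obstacle
    return 0.0 <= p[0] <= 1.0 and alpha + 1.0 <= p[1] <= alpha + 2.0

def in_B2(p):
    return in_B1(np.array([2.0 * mirror - p[0], p[1]]))

def solution_type(path):
    """Classify a path (list of waypoints) as 'Type-B', 'Type-L', or None."""
    hits = {name: any(test(p) for p in path)
            for name, test in [("B1", in_B1), ("B2", in_B2),
                               ("L1", in_L1), ("L2", in_L2)]}
    if hits["B1"] and hits["B2"]:
        return "Type-B"
    if hits["L1"] and hits["L2"]:
        return "Type-L"
    return None
```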
3) Automaton of sampling diagrams:
In the context of the problem above, we develop an automaton to model the progress of the GSE algorithm. Type-L and Type-B solutions are the two major types of solutions we wish to distinguish. Therefore, we consider two states s_accepting and s_rejecting, associated with Type-B and Type-L solutions, respectively. These states are by necessity 'sink' states. Notice that the automaton begins in the state s_init. We are concerned primarily with the progress of the component of the GSE graph containing the initial point. Consequently, in order to better study the GSE graph, we define two interim states s₁ and s₂. Progress from s_init to s_accepting through the two interim states gives us a high-cost solution, and we 'reject' all paths that might lead to low-cost solutions via the rejecting state. Developing this heuristic, we have the following definition.

Definition 2.
An automaton of sampling diagrams is a finite state machine with five states, namely s_init, s₁, s₂, s_accepting, s_rejecting. The machine takes as input a point X sampled from the free space. Based on the location of the point, decisions are made as to the progress of the automaton. Two sets F_i and R_i are affiliated with each state s_i, where i ∈ {1, 2, init, rejecting, accepting}. The states are ordered s_init → s₁ → s₂ → s_accepting; the state s_rejecting is outside this structure. The operation of the automaton is then described as below.
• If the automaton reaches a rejecting or an accepting state, then it does not progress further.
• Let the automaton be in a regular (neither accepting nor rejecting) state s_i. Suppose a point ξ is sampled. If the next vertex added to the graph generated by the GSE algorithm using the point ξ lies in R_i, then the automaton moves to the rejecting state.
Fig. 4: An illustration of the regions F_i and R_i in the promenade environment.

• If the next vertex added to the graph lies in the region F_i, then the automaton progresses to the next state.
• If the vertex added to the graph of the GSE is in neither F_i nor R_i, then the automaton stays in the same state.
By suitably defining the regions R_i and F_i, we can classify the solutions returned by the GSE algorithm. The accepting state is associated with solutions that are of high cost. This motivates us to define these regions as follows:
• For all states, R_i = L₁.
• F_init is a γ × γ square, with γ a very small positive number. This square shares its upper-right corner with B₁ and is homothetic to it.
• F₁ ⊂ [1, α + 1] × [α + 1, α + 2] is a sufficiently small region that is parallel to F_init. We require that F₁ lie entirely within the shape of every point of F_init.
• F₂ is the mirror image of F_init about the line x = (α + 2)/2.
A sample illustration of the regions F_i (the forward regions) and R_i is provided in Fig. 4. A minimal sketch of the resulting state machine is given below.
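The following sketch mirrors the transition rules of Definition 2. The region-membership tests are assumptions supplied by the caller (e.g., built from the promenade regions sketched earlier); state names and function signatures are illustrative.

```python
# A minimal sketch of the Automaton of Sampling Diagrams of Definition 2.
ORDER = ["init", "s1", "s2", "accepting"]   # forward chain; "rejecting" is separate

def asd_step(state, vertex, in_R, in_F):
    """Advance the automaton by one newly added GSE vertex.

    in_R(state, vertex) -> True if the vertex lies in the rejecting region R_i.
    in_F(state, vertex) -> True if the vertex lies in the forward region F_i.
    """
    if state in ("accepting", "rejecting"):
        return state                      # sink states: no further progress
    if in_R(state, vertex):
        return "rejecting"
    if in_F(state, vertex):
        return ORDER[ORDER.index(state) + 1]
    return state                          # neither region: stay in the same state

def run_asd(vertices, in_R, in_F):
    """Feed the sequence of vertices added by the GSE to the automaton."""
    state = "init"
    for v in vertices:
        state = asd_step(state, v, in_R, in_F)
    return state
```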
4) Low-quality solutions in the GSE algorithm:
We analyse the solutions that the GSE algorithm returns for an arbitrarily high number of iterations. This then allows us to conclude whether the GSE algorithm is asymptotically optimal or not. The reasoning for this is provided below. Note that Y_n is defined as in Section II-C. The proof below proceeds in two parts. First, we prove that the automaton provides a schematic for the GSE algorithm in some sense; more precisely, it reaches certain states only when the GSE algorithm returns certain solutions. Paths returned by the GSE that pass through the regions B_i are not optimal, and we associate these paths with the accepting state of the automaton; thereby, if we show that the probability that the automaton reaches the accepting state is bounded below, then we have the desired result. Before we prove this, we define a few quantities.

Definition 3.
Associated with each (non-rejecting) state s of the automaton, we define two regions H⁺(s) and H⁻(s) as follows:
• H⁻(s) = L₁ for all non-rejecting states s
• H⁺(s_init) = ∅
• H⁺(s₁) = F_init
• H⁺(s₂) = F₁
• H⁺(s_accepting) = F₂
We denote by E_i the swath of the GSE, that is, the union of line segments that constitute its edges after i iterations.

Lemma 1. For all α ≥ 1, for any X_init between L₁ and B₁ in the free space, and any X_goal between L₂ and B₂, the automaton described in Section III-C3 moves to
• a rejecting state, on any input (sample sequence), if the GSE returns a Type-L solution;
• an accepting state only if the GSE returns a Type-B solution.
Proof.
Consider any two arbitrary points X₁, X₂ ∈ L₁ᶜ. The line segment joining them does not pass through L₁ (with L₁ as in Fig. 3). Consequently, in order to have a Type-L path, we must have a vertex of the GSE in L₁. But this moves the automaton to a rejecting state, by definition. The first part of the lemma is proved. Now, for the second part of the lemma, consider the following proposition.

Proposition 1. Let a sequence of vertices ξ₁, ξ₂, …, ξ_n and swath E_n be generated by the GSE algorithm and read by the automaton. If the automaton is then in a non-rejecting state s with H⁺(s) ≠ ∅, then E_n ∩ H⁺(s) ≠ ∅.

Proof. We prove this claim by induction on n. For n = 1, if ξ₁ ∈ F_init then the automaton progresses to state s₁. Since H⁺(s₁) = F_init, the claim is true. Next, we assume that the claim holds for some n ∈ ℕ and that the automaton is now in state s_i; therefore H⁺(s_i) ∩ E_n ≠ ∅. By the statement of the proposition, s_i is non-rejecting. The automaton reads the vertex ξ_{n+1}. In the event that the automaton does not progress, H⁺(s_i) remains fixed, and by the induction hypothesis the claim holds trivially. If the next added vertex lies in F_i, the automaton moves forward to the next state s_j, by Def. 2. Since s_j ≠ s_init, H⁺(s_j) is the same as F_i (referring to Def. 3), so we have H⁺(s_j) ∩ E_{n+1} ≠ ∅. This proves the claim in Proposition 1.

Lastly, we conclude that if the automaton reaches the accepting state on an input ξ₁, ξ₂, …, ξ_n, then a vertex of the GSE graph lies in F₂. By suitably defining the region F₂, we ensure that X_goal (the goal point) is in the shape of points in F₂, and vice versa. Then the GSE algorithm will terminate on the (n + 1)-th iteration. Further, the swath of the GSE will intersect B₁ and B₂, and cannot intersect L₁. Consequently, the type of solution will be Type-B, a high-cost sub-optimal solution. This proves the second part of the lemma.

The next lemma shows that the Automaton of Sampling Diagrams reaches an accepting state with a probability that is bounded below, for an arbitrarily high number of iterations.

Lemma 2. If the sets F_i in the definition of the automaton have positive volume, then there exists a constant π₀ > 0 and a natural number N such that for all n ≥ N, P(Accept) ≥ π₀, where 'Accept' is the event that the automaton enters the accepting state after reading n samples.

Proof.
We begin by assuming the automaton is in a state s_j, where j ∉ {accepting, rejecting}. We define
• an event F(s_j) as the event that the next sample lies in a region that moves the automaton forward;
• an event R(s_j) as the event that the next sample lies in a region that moves the automaton to a rejecting state.
We call these events 'critical events'. The probability that the automaton moves forward (denoted π(s_j)), given the occurrence of a critical event, is

π(s_j) ≜ P(F(s_j) | F(s_j) ∪ R(s_j)) = μ(F_j) / (μ(F_j) + μ(R_j))    (8)

Here, μ is the Lebesgue measure. The above equation allows us to draw two conclusions. Firstly, given a sufficiently large input size, the automaton is guaranteed to enter either an accepting or a rejecting state. The set of states forms a finite-state Markov chain with s_accepting and s_rejecting as 'trap' states, and hence for arbitrarily large input sizes the automaton is guaranteed to visit one of these. Consequently, for any δ ∈ (0, 1) there exists a minimal N such that for all n > N, after reading ξ₁, ξ₂, …, ξ_n,

P(reached accepting or rejecting state) ≥ 1 − δ    (9)

Secondly, the probability of progressing from one state to the next is non-zero. Let the probability of progressing from state s_j to the next state be bounded below by some p_j ≥ μ(F_j)/μ(X_free) > 0, by virtue of the geometry of the problem. Denote π₁ = Π_j p_j. We then have

P(Accept) ≥ (1 − δ) π₁ ≜ π₀    (10)

Thus, the lemma follows.

We have thus shown that the Automaton of Sampling Diagrams moves to an accepting state, with a probability that is bounded below, for an arbitrarily high number of iterations. Further, by Lemma 1, the automaton reaches an accepting state only if a Type-B solution is returned by the GSE algorithm. Hence the probability of returning a Type-B solution as defined in Section III-C2 is bounded below by a positive constant. This leads us to the following result, which shows that the GSE algorithm is not asymptotically optimal.

Theorem 1. With Y_n and c⋆ defined as in Section II-C,

P({lim sup_{n→∞} Y_n = c⋆}) = 0    (11)

Proof.
Since Y_n > c⋆ for all n > N with positive probability, lim sup_{n→∞} Y_n > c⋆ with positive probability, which then means

P({lim sup_{n→∞} Y_n = c⋆}) < 1    (12)

The probabilistic completeness property of the GSE algorithm guarantees that the algorithm returns a feasible path of finite cost. Therefore, invoking Lemma 25 of [14], we have the following.

Lemma 3 ([14]). The event {lim sup_{n→∞} Y_n = c⋆} is a tail event, and hence by Kolmogorov's zero-one law it has probability either 0 or 1.

Lemma 3 along with Eq. (12) verifies that Eq. (11) holds. This proves the theorem.

From the above, we conclude that while the GSE is probabilistically complete, it does not possess the desired asymptotic optimality property.

IV. GSE⋆ ALGORITHM
A. Revised Algorithm
In this section, a modified version of the GSE algorithm is presented such that the resulting algorithm possesses the asymptotic optimality property, while it also retains the advantages of the GSE algorithm. To this end, the GSE⋆ algorithm is formulated, which along with its detailed analysis (in Sections IV-B and IV-C) forms one of the key contributions of this paper. The GSE⋆ in essence contains all the major steps used in the iterations of the GSE algorithm. In addition, a few more routines are added, given in Lines 10 and 13 and in Lines 17-23 of Algorithm 3.

Before describing the revised algorithm, we introduce a few functions and notations used in the algorithm. First, the steer function Steer(X, Y, η) in Line 10 of Algorithm 3 returns a point z ∈ X_free ∩ B_η(X) which is closest to the point Y, that is, it returns a point lying in B_η(X) and on the line joining X and Y. More precisely, Steer(X, Y, η) is given as

Steer(X, Y, η) = argmin_{z ∈ B_η(X) ∩ X_free} ‖z − Y‖    (13)

Here, the parameter η is arbitrary, and can be suited to the problem at hand. Let |V| denote the number of vertices of the generated graph. Following Lines 9 and 10 of Algorithm 3, at each iteration of the algorithm two vertices are added: Line 9 adds a vertex following the GSE algorithm, while Line 10 introduces another vertex following the steering procedure in Eq. (13).

We then define the connection radius r_n ≜ min( γ_GSE⋆ (log(|V|)/|V|)^{1/d}, η ). This choice of connection radius r_n is motivated by the need to ensure asymptotic optimality; in the proof of optimality, relevant bounds will be established for the constant γ_GSE⋆. The function Near(V, X, r_n) returns a subset of the set of vertices of the graph generated by the GSE⋆ algorithm, consisting of those vertices which lie within the ball B_{r_n}(X). That is,

Near(V, X, r_n) = V ∩ B_{r_n}(X)    (14)

Thus, the additional routines in the GSE⋆ algorithm over the earlier GSE algorithm are stated below. In Line 10 of Algorithm 3, a new vertex is added using the steering function of Eq. (13). Subsequently, we use the Near function, with the connection radius defined above, to establish a set of candidate vertices for connection in Line 18. Lines 19-22 use the Shape function to check the viability of a collision-free connection with each vertex and add the corresponding edges to the edge set. Note that using the Shape function is a stronger criterion than collision-checking, and is a key feature of the GSE⋆ algorithm, similar to the GSE algorithm. A minimal sketch of this connection step is given after Algorithm 3 below.

B. Probabilistic Completeness of the GSE⋆ Algorithm
We denote the vertex set and edge set of the GSE⋆ after n iterations by V^{GSE⋆}_n and E^{GSE⋆}_n, respectively. The graph of the GSE⋆ algorithm is denoted G^{GSE⋆}_n. As discussed in Section IV-A, the GSE⋆ algorithm adds vertices and edges to the graph generated by the GSE algorithm, so that at any iteration V^{GSE}_n ⊂ V^{GSE⋆}_n and E^{GSE}_n ⊂ E^{GSE⋆}_n. We note that the GSE algorithm is probabilistically complete, as described in Section III-B (details available in [26]). Since the graph generated by the GSE algorithm is a subgraph of the graph generated by the GSE⋆ algorithm, the GSE⋆ algorithm is probabilistically complete.
Algorithm 3: GSE⋆ Algorithm
1: V ← {X_init, X_goal}, E ← ∅
2: S_init(P) ← Shape(P, X_init, X_obs)
3: S_goal(P) ← Shape(P, X_goal, X_obs)
4: for j = 1 … n do
5:   X_rand ← GenerateSample
6:   S_rand(P) ← Shape(P, X_rand, X_obs)
7:   X_nearest ← Nearest(G = (V, E), X_rand)
8:   S_nearest(P) ← Shape(P, X_nearest, X_obs)
9:   X_new,g ← Steer-GSE(X_nearest, X_rand)
10:  X_new ← Steer(X_nearest, X_rand, η)
11:  S_new,g(P) ← Shape(P, X_new,g, X_obs)
12:  X_intersect ← IntersectShape(G, X_new,g)
13:  V ← V ∪ {X_new,g, X_new}
14:  for X_n ∈ X_intersect do
15:    E ← E ∪ {(X_n, X_new,g)}
16:  end for
17:  b ← γ_GSE⋆ (log(|V|)/|V|)^{1/d}
18:  U ← Near(V, X_new, min(b, η))
19:  for u ∈ U do
20:    if Shape(u, X_new, X_obs) = 0 then
21:      E ← E ∪ {(u, X_new)}
22:    end if
23:  end for
24:  G = (V, E)
25:  Path_init,goal,shortest ← MinPath(G, X_init, X_goal)
26: end for
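The extra connection step of Lines 17-22 can be sketched as follows, assuming Euclidean geometry and a caller-supplied shape-membership test (for instance, the one sketched in Section III-A). The parameters gamma and eta and the function names are illustrative.

```python
import numpy as np

def connection_radius(num_vertices, gamma, eta, d):
    """r_n = min(gamma * (log|V| / |V|)^(1/d), eta), as defined in Section IV-A."""
    return min(gamma * (np.log(num_vertices) / num_vertices) ** (1.0 / d), eta)

def near(V, x, r):
    """Eq. (14): vertices of V within the ball of radius r centred at x."""
    V = np.asarray(V)
    return V[np.linalg.norm(V - x, axis=1) <= r]

def connect_near(V, x_new, gamma, eta, in_shape_of):
    """Candidate edges of Lines 19-22: connect x_new to each nearby vertex u
    for which u lies in the generalized shape of x_new (Shape(u, x_new) = 0).
    in_shape_of(u, x_new) is a caller-supplied membership test."""
    d = len(x_new)
    r = connection_radius(len(V), gamma, eta, d)
    return [(u, x_new) for u in near(V, x_new, r) if in_shape_of(u, x_new)]
```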
We now turn to proving that the GSE ⋆ algorithm is asymp-totically optimal. Since the GSE ⋆ algorithm is quite similar inspirit to the RRG algorithm, we can reasonably expect this tobe the case. In the following proof, we carefully follow theargument provided in [14] for the RRG algorithm. However,the selection of the parameter γ GSE ⋆ has been altered from γ GSE to better suit for the analysis of the GSE ⋆ algorithm.Let the optimal path between the initial and goal points be σ ⋆ . This path is assumed to have weak δ -clearance. We assumethat µ ( σ ⋆ ) = 0 , where µ is the Lebesgue measure on X free .The following lemma, while substantially similar to Lemma50 of [14], is included for completeness. Further, it serves tointroduce several key parameters. Lemma For a path σ ⋆ with optimal cost and weak δ -clearance ( δ > ), there exists a sequence of paths ( σ n ) ∞ n =1 such that the following holds • Each path σ n has strong δ n -clearance, with < δ n ≤ δ and lim n →∞ δ n = 0 • lim n →∞ σ n = σ ⋆ σ n σ n ' B n,1 B n,i β Fig. 5: Ilustration of the paths σ n and σ ′ n along with the setof covering balls B ( σ n , q n , l n ) Proof.
Let us define δ_n ≜ min(δ, φ r_n/(1 + φ)) for some bounded positive constant φ. Clearly, the sequence δ_n (with 0 < δ_n ≤ δ) is non-increasing and lim_{n→∞} δ_n = 0 (from the definitions of δ_n and r_n). Consider the n-th element of the sequence (Y_n)_{n=1}^∞ defined as Y_n = {x ∈ X_free : B_{δ_n}(x) ⊆ X_free}. Now, we consider a homotopy ψ where ψ(0) = σ_0 is a path with strong δ-clearance and ψ(1) = σ⋆. We can then define σ_n = ψ(α_n), where α_n = max{t ∈ [0, 1] : ψ(t) ⊆ Y_n}, that is, σ_n is the path with strong δ_n-clearance that is 'closest' to σ⋆. Since δ_n is a non-increasing sequence, by construction Y_1 ⊆ Y_2 ⊆ ··· ⊆ X_free, and ∪_{n=1}^∞ Y_n = X_free. This establishes that the limit lim_{n→∞} α_n exists. Let this limit be a. Then lim_{n→∞} ψ(α_n) = ψ(a). Further, the clearance of ψ(a) is given by lim_{n→∞} δ_n. By the construction of the homotopy ψ, the clearance of ψ(α) is zero only if α = 1. Since lim_{n→∞} δ_n = 0, a = 1. Thus lim_{n→∞} α_n = 1, and consequently lim_{n→∞} σ_n = σ⋆.

We have proved that there exists a sequence of paths (σ_n)_{n=1}^∞ such that each path σ_n has strong δ_n-clearance, lim_{n→∞} δ_n = 0, and lim_{n→∞} σ_n = σ⋆. Next, we show that the probability that there exists a path approximating σ_n in G^{GSE⋆}_n tends to 1 as n → ∞. This ensures that the algorithm is asymptotically optimal. The following definition provides us with a framework toward generating the paths in the GSE⋆ graph that approximate σ_n.

Definition 4.
We define the set

B(σ_n, q_n, l_n) ≜ {B_{n,1}, B_{n,2}, B_{n,3}, …, B_{n,M_n}}    (15)

where each element B_{n,m} is a ball of radius q_n = δ_n/(1 + φ), with l_n = φ q_n, and
• the center of B_{n,1} is σ_n(0) = σ(0);
• the center of B_{n,k} is σ_n(b_k), where b_k ≜ min{b ∈ [b_{k−1}, 1] : ‖σ_n(b) − σ_n(b_{k−1})‖ ≥ φ l_n}, for k ∈ {2, 3, …, M_n};
• the centre of the last ball must be the point σ_n(1), that is, b_{M_n} = 1.
A representation of the set B(σ_n, q_n, l_n) is shown in Fig. 5. We wish to show that each ball B_{n,m} contains a vertex of the GSE⋆ with higher probability as n increases. This then allows us to consider paths in the graph of the GSE⋆ that can approximate paths in the sequence (σ_n)_{n=1}^∞. To prove this result, we first show that any point in the free space is 'close' to a vertex of the GSE⋆ after n iterations with a probability that increases with n.

Fig. 6: An illustration of X_free and the corresponding exhaustive partition X_1, X_2, …, X_M. The black regions represent the obstacles.

Lemma 5. Let C_n at any iteration n denote the event that for any x ∈ X_free, there exists a vertex v ∈ V^{GSE⋆}_n such that ‖x − v‖ ≤ η. Then P(C^c_n) ≤ a e^{−bn} for some positive η, a, b.

Proof.
Define the diameter d(S) of a set S ⊂ R^d as

d(S) = sup_{x,y ∈ S} ‖x − y‖    (16)

We partition the set X_free into a collection of sets {X_i}, i ∈ {1, 2, …, M}, with M finite, such that d(X_i) < η for all i ∈ {1, 2, …, M}. An illustration is provided in Fig. 6. Define the indicator variable

C_{n,i} ≜ 1 if V^{GSE⋆}_n ∩ X_i ≠ ∅, and C_{n,i} ≜ 0 otherwise.    (17)

We can conclude that C^c_n ⊆ ∪_{i=1}^M C^c_{n,i}. Consequently,

P(C^c_n) ≤ Σ_{i=1}^M P(C^c_{n,i})    (18)

We now wish to show that P(C^c_{n,i}) decays exponentially with increasing n. To this end, define the function g : 2^{X_free} → 2^{X_free} as

g(S) = {y ∈ X_free : B_η(y) ∩ S ≠ ∅}    (19)

The iterates of g are denoted g^{(2)}(S) = g(g(S)), g^{(3)}(S) = g^{(2)}(g(S)), etc. For convenience, we denote g({x}) (where x ∈ X_free) by g(x). Notice that μ(g^{(t)}(x)) is an increasing function of t. We conclude that there exists an iterate of g such that X_free ⊆ g^{(t)}(x), for any given point x ∈ X_free. Consider a point x ∈ X_free. We wish to show that

P(g(x) ∩ V^{GSE⋆}_n = ∅) ≤ a_x e^{−b_x n}    (20)

Let i_k ≜ min{i ∈ ℕ : V_k ∩ g^{(i)}(x) ≠ ∅}. Further, define

D_k ≜ 1 if i_{k+1} < i_k, and D_k ≜ 0 otherwise,    (21)

and let D = Σ_{i=1}^n D_i. Since passing to successive iterates of the function g at least t times ensures that there will be a vertex of the GSE⋆ in the set g(x), we have P(there is no vertex in B_η(x)) ≤ P(D < t). We claim that at the k-th iteration of the GSE⋆, P(D_k = 1) is bounded below by a positive fraction p₀ < 1 for all k with i_k > 1. At the k-th iteration of the GSE⋆, let there be w (≤ k) vertices of the GSE⋆-generated graph belonging to g^{(i_k)}(x). We need to establish that, for any location of these w vertices in g^{(i_k)}(x), there is a minimum probability of extending the graph to a point belonging to g^{(i_k − 1)}(x). It is evident that the probability of D_k = 1 increases as w increases (assuming the other k − w vertices are kept fixed), because the number of candidate vertices for extension of the GSE⋆-generated graph increases. Hence, it suffices to consider the case wherein w = 1. Let this vertex be denoted by Z, and let Vor(Z) denote the Voronoi cell of the vertex Z in the Voronoi diagram consisting of the k vertices of the generated graph so far. Because of the finite number (k) of vertices in the generated graph, for the only vertex Z ∈ g^{(i_k)}(x) it follows that μ(Vor(Z)) is bounded below by some number, say e_{i_k}. Now, for any arbitrary location of Z in the set g^{(i_k)}(x), there is a subset of Vor(Z) in which, when the new vertex is sampled, the generated graph is extended to a point in g^{(i_k − 1)}(x). Let the minimum of this fraction over the set g^{(i_k)}(x) be denoted by g_{i_k}. We can then conclude that the claim holds true for p₀ = min_{i_k} e_{i_k} g_{i_k}/μ(X_free). Let us next consider a random variable E = Σ_{i=1}^n E_i, where the E_i are i.i.d. Bernoulli random variables with P(E_i = 1) = p₀. We know that P(D_i = 1) > P(E_i = 1) = p₀. Consequently, the probability distribution of the random variable E provides a worst-case (lower) bound, that is, P(D > t) > P(E > t), and taking complements on both sides we have P(D < t) < P(E < t). We can then use Chernoff bounds on the random variable E, so that for some k₀, l₀ > 0,

P(D < t) ≤ P(E < t) ≤ k₀ e^{−l₀ n}    (22)

For sufficiently large n, P(D < t) ≤ a_x e^{−b_x n}.
Since P(V^{GSE⋆}_n ∩ g(x) = ∅) ≤ P(D < t), we have P(V^{GSE⋆}_n ∩ g(x) = ∅) ≤ a_x e^{−b_x n}. Since we can select a suitable point in any given set X_i and apply the argument above, we conclude from Eq. (17) that P(C^c_{n,i}) ≤ a_i e^{−b_i n} for some positive a_i, b_i independent of n. Therefore, since the sum of finitely many exponentially decreasing functions is bounded above by an exponentially decreasing function, one concludes from Eq. (18) that P(C^c_n) ≤ a e^{−bn}.

Let ρ ∈ (0, 1) be a variable independent of n. The event ∩_{i=⌊ρn⌋}^n C_i indicates that between the ⌊ρn⌋-th iteration and the n-th iteration of the GSE⋆ algorithm, it is not necessary to steer any newly sampled vertices using the Steer function (Line 10 of Algorithm 3); they are directly added to the graph. Since bounds on the probability of this event are very relevant to our proof, we have the following lemma.
Lemma 6. Σ_{n=1}^∞ P( (∩_{i=⌊ρn⌋}^n C_i)^c ) < ∞.

Proof.
Since

Σ_{n=1}^∞ P( (∩_{i=⌊ρn⌋}^n C_i)^c ) = Σ_{n=1}^∞ P( ∪_{i=⌊ρn⌋}^n C^c_i ) ≤ Σ_{n=1}^∞ Σ_{i=⌊ρn⌋}^n a e^{−bi}    (23)

we conclude that Σ_{n=1}^∞ P( (∩_{i=⌊ρn⌋}^n C_i)^c ) < ∞ for all a, b > 0.

Lemmas 5 and 6 above lead us to the proof of Lemma 7 below.

Lemma 7. Define the event A_{n,m} as the event that the ball B_{n,m} contains a vertex of the GSE⋆ after n iterations of the GSE⋆ algorithm. Define A_n ≜ ∩_{m=1}^{M_n} A_{n,m}, where M_n, B_{n,m}, etc. are as defined in Def. 4. Then P(lim inf_{n→∞} A_n) = 1.

Proof.
We proceed by showing that Σ_{n=1}^∞ P(A^c_n) is bounded. Then, by the Borel-Cantelli lemma, P(lim sup_{n→∞} A^c_n) = 0, and thus we can directly conclude that P(lim inf_{n→∞} A_n) = 1. Let s_n be the length of the path σ_n. We denote the volume of the unit ball in R^d by ζ_d. Let n₀ = argmin_{n ∈ ℕ} (δ_n < δ). In what follows, we are concerned only with n for which n > n₀. Since the distance between the centres of two successive balls (along the curve) in B(σ_n, q_n, l_n) is less than l_n, we can conclude that

M_n < (2 + φ) s_n / (φ (1 + φ) δ_n),   μ(B_{n,m}) = ζ_d q_n^d    (24)

Let ⌊x⌋ denote the largest integer less than a given real number x. Observe that the event ∩_{i=⌊ρn⌋}^n C_i denotes the event that every vertex sampled between iterations ⌊ρn⌋ and n is within distance η of an existing vertex of the GSE⋆. This ensures that no new vertex of the GSE⋆ graph needs to be steered. Thus, we can treat the vertices in the set V^{GSE⋆}_n \ V^{GSE⋆}_{⌊ρn⌋} as uniform random samples that are added directly to the graph generated by the GSE⋆. Let h ≜ (2 + φ)/φ. The conditional probability of the event A^c_{n,m} conditioned on the event ∩_{i=⌊ρn⌋}^n C_i can therefore be bounded as

P( A^c_{n,m} | ∩_{i=⌊ρn⌋}^n C_i ) ≤ (1 − μ(B_{n,m})/μ(X_free))^{n − ⌊ρn⌋} ≤ (1 − ζ_d r_n^d/(h^d μ(X_free)))^{(1−ρ)n}    (25)

We observe that (1 − x)^a ≤ e^{−ax} for real a and x. Thus the right-hand side of Eq. (25) is bounded above as follows:

(1 − ζ_d r_n^d/(h^d μ(X_free)))^{(1−ρ)n} ≤ e^{(ρ−1) n ζ_d r_n^d/(h^d μ(X_free))}    (26)

Since |V^{GSE⋆}_n| ≥ n, we know that for sufficiently large n, log(card(V))/card(V) ≤ log(n)/n. Thus, we have the following inequality:

e^{(ρ−1) n ζ_d r_n^d/(h^d μ(X_free))} ≤ n^{−(1−ρ) γ^d_{GSE⋆} ζ_d/(μ(X_free) h^d)}    (27)

From Eqs. (25) and (26), along with the definition of A_n,

P( A^c_n | ∩_{i=⌊ρn⌋}^n C_i ) ≤ M_n n^{−(1−ρ) γ^d_{GSE⋆} ζ_d/(μ(X_free) h^d)} ≤ ((2 + φ) s_n / ((1 + φ) φ γ_GSE⋆ (log n)^{1/d})) n^{1/d − (1−ρ) γ^d_{GSE⋆} ζ_d/(μ(X_free) h^d)}    (28)

In order to ensure that the left-hand side of Eq. (28) decreases with increasing n, and to have the sequence on the right-hand side of Eq. (28) summable, we require that the exponent of n be less than −1, that is,

(1−ρ) γ^d_{GSE⋆} ζ_d/(μ(X_free) h^d) − 1/d > 1    (29)

If the above criterion is satisfied, the sequence Σ_{n=1}^∞ P(A^c_n | ∩_{i=⌊ρn⌋}^n C_i) is summable. Therefore, the condition on γ_GSE⋆ reduces to the following:

γ_GSE⋆ > h [ (1 + 1/d) μ(X_free)/((1−ρ) ζ_d) ]^{1/d}    (30)

Lastly, from the definition of conditional probability, we have

P(A^c_n | ∩_{i=⌊ρn⌋}^n C_i) = P(A^c_n ∩ (∩_{i=⌊ρn⌋}^n C_i)) / P(∩_{i=⌊ρn⌋}^n C_i) ≥ P(A^c_n ∩ (∩_{i=⌊ρn⌋}^n C_i)) ≥ P(A^c_n) − P((∩_{i=⌊ρn⌋}^n C_i)^c)    (31)

Thus, we have

Σ_{n=1}^∞ P(A^c_n) ≤ Σ_{n=1}^∞ P(A^c_n | ∩_{i=⌊ρn⌋}^n C_i) + Σ_{n=1}^∞ P((∩_{i=⌊ρn⌋}^n C_i)^c)    (32)

Following Lemma 6 and Eq. (28), the right-hand side of the inequality above is summable. Therefore, Σ_{n=1}^∞ P(A^c_n) is bounded, and consequently, by the Borel-Cantelli lemma, we have P(lim sup_{n→∞} A^c_n) = 0. Taking complements gives P(lim inf_{n→∞} A_n) = 1.

Observation. Observe that the distance between two points x ∈ B_{n,i} and y ∈ B_{n,i+1} can be bounded as

‖x − y‖ ≤ q_n + l_n ≤ δ_n < r_n    (33)

Further, the distance from any point x in a ball B_{n,i} to the obstacle space is denoted by o_x.
Then o_x ≥ δ_n − q_n > 0. Thus, for any point x ∈ B_{n,i}, every point in the adjacent balls lies in the shape of the point x. We have shown in Lemma 7 that the event A_n occurs infinitely often with probability one as n increases to infinity. Since the distance between any two points in two consecutive balls is less than r_n, two vertices in consecutive balls are connected. Consequently, we can claim that the GSE⋆ algorithm creates a path between the initial and goal points.

Definition 5 (Homogeneous Poisson Point Process [29]). Let Poisson(a) be a Poisson random variable of intensity a. A homogeneous Poisson point process of intensity ν on R^d is a random countable set of points P^d_ν ⊂ R^d such that, for disjoint and measurable sets S₁, S₂ ⊂ R^d, |P^d_ν ∩ S₁| = Poisson(μ(S₁)ν) and |P^d_ν ∩ S₂| = Poisson(μ(S₂)ν), with Poisson(μ(S₁)ν) and Poisson(μ(S₂)ν) independent.

Lemma 8 ([29]). Given a sequence of points (x_i)_{i=1}^∞ drawn independently and uniformly from a measurable set L ⊆ R^d, the set {x₁, x₂, …, x_{Poisson(τ)}} is the restriction to L of a homogeneous Poisson point process of intensity τ/μ(L).

Let the set of paths in the graph generated by the GSE⋆ algorithm in n iterations be P_n. Let us define σ'_n ≜ argmin_{σ' ∈ P_n} ‖σ_n − σ'‖_BV. It now remains to be shown that the sequence of paths σ'_n converges to the path σ⋆.

Theorem 2. With the sequence (σ'_n)_{n=1}^∞ defined as above, P( lim_{n→∞} ‖σ_n − σ'_n‖_BV = 0 ) = 1.

Proof.
We establish this by showing that Σ_{n=1}^∞ P(‖σ_n − σ'_n‖_BV > ε) is finite for any ε > 0. Thus, by a straightforward application of the Borel-Cantelli lemma, we can conclude that P( lim_{n→∞} ‖σ_n − σ'_n‖_BV = 0 ) = 1. We fix ε > 0 and let α, β ∈ (0, 1) be two constants independent of n. Define βB_{n,m} to be a ball concentric with B_{n,m} with radius βq_n. Fig. 5 provides a representation of the covering balls and paths. We define the indicator variables I_{n,m} as follows:

I_{n,m} ≜ 1 if there is a vertex of the GSE⋆ in the ball βB_{n,m}, and I_{n,m} ≜ 0 otherwise.

We define K_n = Σ_{m=1}^{M_n} (1 − I_{n,m}); this measures the number of balls βB_{n,m} which do not contain a vertex from the graph constructed by the GSE⋆ algorithm. Let L be an upper bound on the lengths of the paths σ_n. We can conclude that if K_n ≤ αM_n, then ‖σ_n − σ'_n‖_BV ≤ (√α + β(1 − α)) L. Hence,

{K_n ≤ αM_n} ⊆ {‖σ_n − σ'_n‖_BV ≤ (√α + β(1 − α)) L}    (34)

This allows us to conclude that

P({K_n ≥ αM_n}) ≥ P({‖σ_n − σ'_n‖_BV ≥ (√α + β(1 − α)) L})    (35)

Thus, to prove the theorem, we need to show that Σ_{n=1}^∞ P({K_n ≥ αM_n}) < ∞. In order to show this, we first poissonize the sampling process to obtain a homogeneous Poisson point process as in Def. 5 and Lemma 8. Subsequently, we de-poissonize the process to obtain the relevant bounds on P({K_n ≥ αM_n}). Consider λ < 1, independent of n, and consider the random variable Poisson(λn). Let the points sampled by the GSE⋆ (independently and uniformly at random) be {x₁, x₂, …, x_n}.

Fig. 7: Illustration of the shortest feasible paths (red lines) found by the GSE and GSE⋆ algorithms in 50 iterations, in 2-D and 3-D environments with 4 obstacles: (a) GSE, 2-D, cost 3.64; (b) GSE⋆, 2-D, cost 3.61; (c) GSE, 3-D, cost 84.32; (d) GSE⋆, 3-D, cost 81.37.
Fig. 8: Convergence of cost (m) versus iterations to the optimal cost c⋆ for the GSE and GSE⋆ algorithms: (a) convergence plot for m = 4; (b) convergence plot for m = 16.
By Lemma 8, we can conclude that the set P^d_{λn} = {x₁, x₂, …, x_{Poisson(λn)}} is a homogeneous Poisson point process of intensity λn/μ(X_free). Let us denote by K̂_n the number of balls in βB(σ_n, q_n, l_n) = {βB_{n,1}, βB_{n,2}, …, βB_{n,M_n}} that fail to have a point from the set P^d_{λn} within radius βq_n of their centres. Notice that the underlying process is different from the process for the original variable K_n. Relating the original process and the poissonized version (following [30]),

P(K_n ≥ αM_n) ≤ P(K̂_n ≥ αM_n) + P(Poisson(λn) ≥ n)    (36)

We can directly offer the bound P({Poisson(λn) ≥ n}) ≤ e^{−cn}, where c > 0 is a constant, by the definition of a Poisson random variable. In order to bound P(K̂_n ≥ αM_n), observe that for sufficiently small β the sets βB_{n,m} are disjoint. Thus, the events that each of these balls does not contain a point from P^d_{λn} (which is a homogeneous Poisson point process) are mutually independent. The probability p_n that any given ball does not contain such a point is therefore given as

p_n = e^{−λn μ(βB_{n,m})/μ(X_free)}    (37)

The variable K̂_n can therefore be compared to a binomial random variable (with parameters M_n and p_n) as follows:

P(K̂_n ≥ αM_n) ≤ P(Binomial(M_n, p_n) ≥ αM_n) ≤ e^{−M_n p_n}    (38)

Therefore, from Eq. (38) we can conclude the following:

Σ_{n=1}^∞ P(K_n ≥ αM_n) ≤ Σ_{n=1}^∞ (e^{−cn} + e^{−M_n p_n}) < ∞    (39)

Notice that the constants α, β are arbitrary. Consequently, we can set ε = (√α + β(1 − α)) L for arbitrary ε > 0. From the inequality (35) and the equation above we know that

Σ_{n=1}^∞ P({‖σ_n − σ'_n‖_BV > ε}) < ∞    (40)

By the Borel-Cantelli lemma and the equation above, we have P({lim_{n→∞} ‖σ_n − σ'_n‖_BV = 0}) = 1. Since lim_{n→∞} σ_n = σ⋆ from Lemma 4, by application of the triangle inequality,

P({lim_{n→∞} c(σ'_n) = c(σ⋆)}) = 1    (41)

Thus, the asymptotic optimality of the GSE⋆ is proved.

Observation. As a corollary of Theorem 2, we notice that ‖σ'_n − σ⋆‖_BV ≤ ‖σ'_n − σ_n‖_BV + ‖σ_n − σ⋆‖_BV, and hence P(‖σ'_n − σ⋆‖_BV > ε) decays exponentially with increasing n for all ε > 0.

V. SIMULATION RESULTS
In this section, we present numerical validation of the properties of the GSE and GSE⋆ algorithms analyzed in this paper. Simulation studies have been carried out in 2-D and 3-D environments having 4 and 16 obstacles, representing low to high obstacle densities, using MATLAB R2020a on an Intel Core i7 2.2 GHz processor. Simulations and comparison studies related to probabilistic completeness of the GSE have already been provided in [26] and hence are omitted here for brevity. First, the feasible shortest paths generated using the GSE and GSE⋆ algorithms over 50 iterations in the 2-D and 3-D environments with 4 obstacles are illustrated in Figs. 7a, 7b and Figs. 7c, 7d, respectively. Throughout the simulations, the cost function is taken to be the Euclidean path length. Since PRM⋆ is asymptotically optimal, the approximate optimal cost (c⋆) is found using PRM⋆ run over 8000 iterations. Figs. 8a and 8b depict the convergence of costs to the optimal one for 4 and 16 obstacles in a 3-D workspace. For a given number of obstacles, four different randomly generated workspaces are considered. For each workspace, 100 simulations of 500 iterations each have been run for each algorithm. The cost obtained is averaged over each set of 100 simulations for a given workspace. This average cost is again averaged over the four different workspaces, thus providing the costs depicted in the convergence plots in Figs. 8a and 8b.

The numerical simulations are primarily aimed at demonstrating the performance of the GSE⋆ algorithm, in comparison with the original GSE algorithm, in terms of the convergence of the cost function to the optimal cost. It is observed that the GSE⋆ converges to the optimal cost in around 150 and 300 iterations for 4 and 16 obstacles, respectively, whereas the GSE is found not to converge to the optimal cost even in 500 iterations. This justifies the claim of asymptotic non-optimality of the GSE in Section III-C and the asymptotic optimality of the GSE⋆ algorithm in Section IV-C.

VI. CONCLUSION
The recently-proposed Generalized Shape Expansion (GSE) algorithm was previously shown to be probabilistically complete. In this paper, we have extended the study of the GSE to show that it is not asymptotically optimal, by observing its non-zero probability of generating low-quality solutions to the promenade problem. In order to improve on the GSE in the context of asymptotic optimality, a modified GSE algorithm, namely the GSE⋆ algorithm, has been presented. A detailed mathematical analysis has been given to prove both probabilistic completeness and asymptotic optimality of the GSE⋆. Further, simulation studies comparing the performance of these algorithms in various environments have shown a marked improvement in asymptotic convergence to the optimal cost by the GSE⋆ over the GSE, which justifies the presented theoretical results. The key feature of the generalized shape expansion-based algorithms is the use of the generalized shape to generate collision-free paths; leveraging these shapes has been found to greatly reduce the number of iterations required to obtain a feasible path while attaining a minimally connected graph. Extending the GSE⋆ while maintaining this notion, to gain further computational advantages, could be a potential future scope of research.

REFERENCES
[1] O. Khatib, "Real-time obstacle avoidance for manipulators and mobile robots," IEEE International Conference on Robotics and Automation, pp. 500–505, 1985.
[2] Y. Koren and J. Borenstein, "Potential field methods and their inherent limitations for mobile robot navigation," International Conference on Robotics and Automation, pp. 1398–1404, 1991.
[3] P. Fiorini and Z. Shiller, "Motion planning in dynamic environments using velocity obstacles," The International Journal of Robotics Research, vol. 17, no. 7, pp. 760–772, 1998.
[4] J. Van Den Berg, S. J. Guy, M. Lin, and D. Manocha, "Reciprocal n-body collision avoidance," in Robotics Research. Springer, 2011, pp. 3–19.
[5] A. Chakravarthy and D. Ghose, "Collision cones for quadric surfaces," IEEE Transactions on Robotics, vol. 27, no. 6, pp. 1159–1166, 2011.
[6] A. Chakravarthy and D. Ghose, "Generalization of the collision cone approach for motion safety in 3-d environments," Autonomous Robots, vol. 32, no. 3, pp. 243–266, 2012.
[7] J. Wilhelm, G. Clem, and D. Casbeer, "Circumnavigation and obstacle avoidance guidance for uavs using gradient vector fields," in AIAA Scitech 2019 Forum, 2019, p. 1791.
[8] Q. Gong, R. Lewis, and M. Ross, "Pseudospectral motion planning for autonomous vehicles," Journal of Guidance, Control, and Dynamics, vol. 32, no. 3, pp. 1039–1045, 2009.
[9] S. Ghosh, D. Davis, and T. Chung, "A guidance law for avoiding specific approach angles against maneuvering targets," IEEE Conference on Decision and Control, Las Vegas, USA, pp. 4142–4147, 2018.
[10] S. Ghosh, O. A. Yakimenko, D. T. Davis, and T. H. Chung, "Unmanned aerial vehicle guidance for an all-aspect approach to a stationary point," Journal of Guidance, Control, and Dynamics, vol. 40, no. 11, pp. 2871–2888, 2017.
[11] R. Brooks and T. Lozano-Perez, "A subdivision algorithm in configuration space for findpath with rotation," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, pp. 224–233, 1985.
[12] J. S. B. Mitchell, G. Rote, and G. Woeginger, "Minimum-link paths among obstacles in the plane," Algorithmica, vol. 8, no. 1, pp. 431–459, Dec 1992.
[13] L. Kavraki, P. Svestka, J. Latombe, and M. H. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," Transactions on Robotics and Automation, vol. 12, no. 4, pp. 566–580, 1996.
[14] S. Karaman and E. Frazzoli, "Sampling-based algorithms for optimal motion planning," The International Journal of Robotics Research, vol. 30, no. 7, pp. 846–894, 2011.
[15] Y. Chen, M. Cutler, and J. P. How, "Decoupled multiagent path planning via incremental sequential convex programming," in IEEE International Conference on Robotics and Automation. IEEE, 2015, pp. 5954–5961.
[16] F. Augugliaro, A. P. Schoellig, and R. D'Andrea, "Generation of collision-free trajectories for a quadrocopter fleet: A sequential convex programming approach," in International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 1917–1922.
[17] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, "Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3-d complex environments," IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1688–1695, 2017.
[18] S. M. LaValle and J. J. Kuffner Jr, "Randomized kinodynamic planning," The International Journal of Robotics Research, vol. 20, no. 5, pp. 378–400, 2001.
[19] L. Janson, E. Schmerling, A. Clark, and M. Pavone, "Fast marching tree: a fast marching sampling-based method for optimal motion planning in many dimensions," 2015.
[20] F. Baldini, S. Bandyopadhyay, R. Foust, S.-J. Chung, A. Rahmani, J.-P. de la Croix, A. Bacula, C. M. Chilan, and F. Hadaegh, "Fast motion planning for agile space systems with multiple obstacles," in AIAA/AAS Astrodynamics Specialist Conference, 2016, p. 5683.
[21] V. Zinage and S. Ghosh, "An efficient motion planning algorithm for uavs in obstacle-cluttered environment," American Control Conference, IEEE, Philadelphia, PA, pp. 2271–2276, July 2019.
[22] V. V. Zinage and S. Ghosh, "Generalized shape expansion-based motion planning in three-dimensional obstacle-cluttered environment," Journal of Guidance, Control, and Dynamics, pp. 1–11, 2020.
[23] V. Zinage and S. Ghosh, "Generalized shape expansion-based motion planning for uavs in three dimensional obstacle-cluttered environment," in AIAA Scitech 2020 Forum, 2020, p. 0860.
[24] V. Zinage and S. Ghosh, "Directional sampling-based generalized shape expansion for accelerated motion planning in 2-d obstacle-cluttered environments," IEEE Control Systems Letters, vol. 5, no. 3, pp. 1067–1072, 2020.
[25] L. Kavraki, M. Kolountzakis, and J. Latombe, "Analysis of probabilistic roadmaps for path planning," IEEE Transactions on Robotics and Automation, vol. 14, no. 1, pp. 166–171, 1998.
[26] A. Ramkumar, V. Zinage, and S. Ghosh, "On probabilistic completeness of the generalized shape expansion-based motion planning algorithm," in accepted at Conference on Decision and Control. IEEE, 2020.
[27] O. Nechushtan, B. Raveh, and D. Halperin, "Sampling-diagram automata: A tool for analyzing path quality in tree planners," vol. 68, 01 2010, pp. 285–301.
[28] O. Nechushtan, B. Raveh, and D. Halperin, "Sampling-diagram automata: A tool for analyzing path quality in tree planners," in Algorithmic Foundations of Robotics IX. Springer, 2010, pp. 285–301.
[29] S. Chiu, D. Stoyan, W. Kendall, and J. Mecke, Stochastic Geometry and Its Applications, ser. Wiley Series in Probability and Statistics. Wiley, 2013.
[30] M. Penrose,