A Continuation Method for Large-Scale Modeling and Control: from ODEs to PDE, a Round Trip
Denis Nikitin, Carlos Canudas-de-Wit and Paolo Frasca
Abstract—In this paper we present a continuation method which transforms spatially distributed ODE systems into continuous PDEs. We show that this continuation can be performed both for linear and nonlinear systems, including multidimensional, space- and time-varying systems. When applied to a large-scale network, the continuation provides a PDE describing the evolution of a continuous state approximation that respects the spatial structure of the original ODE. Our method is illustrated by multiple examples including transport equations, Kuramoto equations and heat diffusion equations. As a main example, we perform the continuation of a Newtonian system of interacting particles and obtain the Euler equations for compressible fluids, thereby providing an original alternative solution to Hilbert's 6th problem. Finally, we leverage our derivation of the Euler equations to control multiagent systems, designing a nonlinear control algorithm for robot formation based on its continuous approximation.
Index Terms—Control of Large-Scale Networks, Multiagent Systems, PDE
I. INTRODUCTION

MOST of the systems we encounter in real life consist of such a large number of particles that the direct analysis of their interaction is impossible. In such cases, simplified models are used that aggregate the behavior of a set of particles and replace them with a continuous representation.

The first model describing a system of moving and interacting particles was the Euler equation for liquids and gases, which worked with a continuous field [1]. This equation was one of the first Partial Differential Equations (PDEs). Since then, many PDEs have been created that serve as models for describing different physical processes, such as the Maxwell equations for the electromagnetic field [2].

With the advent of computers, more and more attention has been paid to discretization, that is, the process of transforming a PDE into a system of ordinary differential equations in order to be able to solve it numerically [3]. A correct discretization satisfies the condition that when the size of the resulting ODE system increases, its solution converges to the solution of the original PDE. The choice of an appropriate discretization method depends heavily on the PDE type, so that the solution preserves the properties of the PDE [4].

Although discretization is a widely known and widely used method, the inverse problem of transforming an ODE system into a PDE is more rarely studied. Our work focuses on this particular problem, with the aim of filling this gap and providing a counterpart to the discretization procedure. This can be useful since PDEs provide a much more compact way
D. Nikitin, C. Canudas-de-Wit and P. Frasca are with Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, GIPSA-lab, 38000 Grenoble, France.

of describing the system, which in many cases is easier to analyze analytically than the corresponding ODE system.

The idea of replacing the system with its compact and simplified representation is widely used, especially for ODE systems describing large-scale networks. Probably the most known approach of this type is model reduction, a technique which transforms a network into a smaller one while conserving its properties and dynamics, see [5], [6]. Also the reduction towards the average state was studied in [7], [8].

Apart from various model reduction techniques, large-scale networks are studied by mean-field methods in the case of an all-to-all interaction topology. In this situation the effect of the network on each node is the same, therefore it is enough to use an equation for a single agent together with parameters of the state of the whole network, see [9] for a review with application to Kuramoto networks.

The idea of mean field can be further extended to track not only a single agent's state, but the whole probability distribution over all agents' states in the network. This method is called the population density approach [10] and is mostly used to model large biological neural networks. In other fields of physics it is described as a probability density of a physical system's state and is constructed by projection to probabilistic space, see [11].

Large-scale systems can also be simplified by studying approximations to their probability densities, represented by moments. E.g., in [12], [13] a moment-based approach was taken to control crowd dynamics. Different applications and issues of the method of moments are covered in [14].
Also shape parameters can be used to simplify the model and describe the shape of the solution as in [15].

Mean-field and population density approaches are suitable in the case when the interaction topology between nodes is all-to-all. In other cases, the continuous representation of a system requires more sophisticated tools. A recently emerged theory of graphons studies graph limits, i.e. structural properties that the graph possesses if the number of nodes tends to infinity while preserving the interaction topology, see [16]. Using graphons it is possible to describe any dense graph as a linear operator in a continuum space [17]. This method was further used to control large-scale linear networks [18] and to study sensitivity of epidemic networks [19].

However, we are interested in systems that are spatially distributed and which have a position-dependent interaction, such as traffic in the city, power networks, robot formations, etc. By applying the population density method or graphon theory to such a system we would end up with a continuous model which loses the spatial structure of the problem.

Our idea is to replace the original spatially distributed ODE
Figure 1. Proposed framework for control design based on the continuation method and a continuous representation of the system.

system by a continuous PDE whose state and space variables preserve the state and space variables of the original system. We develop a method for linear spatially invariant ODEs which transforms them into PDEs with the help of finite differences. We name this method a continuation, since it is exactly opposite to the discretization procedure. Further, we show how the continuation converges to the original system in the sense of spectrum. Using the computational graph formalism [20] we extend the method to nonlinear systems and further to space-dependent systems and systems with boundaries.

The continuation method allows to recover a PDE which describes the same physical system as the original ODE network. Such a description can be very helpful both for analysis and control design purposes. Indeed, one can use an obtained PDE to design a continuous control which, being discretized back, results in a control law for the original ODE system: this design framework is illustrated in Fig. 1.

Moving to the examples of the application of the continuation method, we tackle Hilbert's 6th problem, questioning how one can rigorously transform a system of interacting particles into the Euler PDE. We provide our treatment of this problem using continuation, deriving the Euler PDE from a system of Newton's laws for the case of long-range interaction forces. Further, we show how the method can be applied to multiagent control, providing a simple control algorithm to stabilize a robotic formation along a desired trajectory, performing a maneuver of passing through a window. The control is derived on the level of the PDE representation and then it is discretized to be implemented on every agent in accordance with the scheme in Fig. 1.

We start in Section II by defining a continuation for linear ODEs, discussing questions of accuracy, convergence and choice of the particular model.
Section III continues to nonlinear models, utilizing the computational graph formalism. In Section IV the method is extended to a much broader class of systems, including multidimensional or space- and time-varying systems, and also discussing boundary conditions. Section V is devoted to a derivation of the Euler PDE from a system of interacting particles. Finally, Section VI applies the method to control a robotic formation.

Figure 2. System of nodes aligned on a 1D line with dynamics given by (1).

II. METHOD FOR LINEAR SYSTEMS
The simplest class of systems for which the transformation of ODE into PDE can be performed is given by linear ODE systems corresponding to the dynamics of states of nodes which are aligned on a 1D line in space and depend only on some fixed set of their neighbours. Let the node $i$ have a state $\rho_i \in \mathbb{R}$ and a geographical position $x_i \in \mathbb{R}$ such that for every $i$ the distance between two consecutive nodes in space is constant, $x_{i+1} - x_i = \Delta x$ (the assumption of $\Delta x$ being constant will be relaxed later on). Then the systems of our interest take the form

$$\dot{\rho}_i = \sum_{j=1}^{N} a_j \rho_{i+s_j}, \quad (1)$$

that is, $\dot{\rho}_i$ linearly depends only on $N$ neighbouring nodes shifted by $s_j \in \mathbb{Z}$ for $j \in \{1..N\}$, and $a_j \in \mathbb{R}$ are the system gains, see Fig. 2. This type of systems belongs to the class of linear spatially invariant systems [21], which is a natural class for distributed control.

A. Motivating Example
We start by considering the most simple ODE system of class (1) which has spatial dependence:

$$\dot{\rho}_i = \frac{1}{\Delta x}\left(\rho_{i+1} - \rho_i\right). \quad (2)$$

Comparing with (1), here $N = 2$, $a_1 = 1/\Delta x$, $a_2 = -1/\Delta x$, $s_1 = 1$, $s_2 = 0$. This equation describes a transport of some quantity along the line, and is usually referred to as a Transport ODE. Equation (2) is sometimes studied on its own, but often it comes as a result of a discretization process applied to another equation,

$$\frac{\partial \rho}{\partial t} = \frac{\partial \rho}{\partial x}. \quad (3)$$

This equation belongs to the class of PDEs, which is usually thought to be a more difficult class of equations to study than ODEs. However, equation (3) describes a perfect transport of information with finite propagation speed along the line, which can be studied much more easily in PDE form than in ODE form, as it perfectly conserves the form of a solution, performing only a shift along the line as time increases. We will refer to this equation as a Transport PDE.

Equation (2) can be obtained from (3) by the discretization process, which has been a well-established mathematical tool for centuries. Nevertheless, up to now there was no strict procedure describing a general process which could render equation (3) from (2). In the next subsections we explore more how the general discretization procedure is defined for linear systems and how it should be inverted to obtain a continuation process.

B. Discretization
The discretization of PDEs is usually performed by a finite difference method, where the partial derivatives are approximated by finite differences. For example, in the case of the Transport PDE,

$$\frac{\partial \rho}{\partial x} \approx \frac{1}{\Delta x}\left(\rho_{i+1} - \rho_i\right).$$

This approximation is valid when $\Delta x$ is small. Indeed, assuming that the solution to the PDE is given by a smooth function $\rho(x)$ and using Taylor series, we can write

$$\rho_{i+1} = \rho(x_{i+1}) = \rho(x_i) + \frac{\partial \rho}{\partial x}\Delta x + \frac{1}{2}\frac{\partial^2 \rho}{\partial x^2}\Delta x^2 + \dots, \quad (4)$$

where all partial derivatives are calculated in $x_i$. Thus, subtracting $\rho_i$ and dividing by $\Delta x$, we get

$$\frac{\partial \rho}{\partial x} = \left[\frac{1}{\Delta x}\left(\rho_{i+1} - \rho_i\right)\right] - \frac{1}{2}\frac{\partial^2 \rho}{\partial x^2}\Delta x - \dots, \quad (5)$$

which means that the residual belongs to the class $O(\Delta x)$ of all functions which go to zero at least as fast as $\Delta x$. Thus, taking $\Delta x$ sufficiently small, one can ensure arbitrary accuracy of the approximation, provided all the partial derivatives are bounded.

Accuracy can be further increased by taking different points where the function is sampled, called stencil points. For example, writing

$$\rho_{i-1} = \rho(x_{i-1}) = \rho(x_i) - \frac{\partial \rho}{\partial x}\Delta x + \frac{1}{2}\frac{\partial^2 \rho}{\partial x^2}\Delta x^2 - \dots, \quad (6)$$

subtracting (6) from (4) and dividing by $2\Delta x$, we get

$$\frac{\partial \rho}{\partial x} = \left[\frac{1}{2\Delta x}\left(\rho_{i+1} - \rho_{i-1}\right)\right] - \frac{1}{6}\frac{\partial^3 \rho}{\partial x^3}\Delta x^2 + \dots. \quad (7)$$

Thus, using stencil points $\{i-1, i+1\}$ to approximate the first-order derivative in the point $i$, the obtained residual belongs to the class $O(\Delta x^2)$, which means that this discretization of the Transport PDE is accurate to the second order.

In general, if one wants to approximate the derivative of order $m$ in point $i$ using $N$ stencil points $\{i+s_1, i+s_2, \dots, i+s_N\}$ with $m < N$ in the form

$$\frac{\partial^m \rho}{\partial x^m} \approx \sum_{j=1}^{N} \hat{a}_j \rho_{i+s_j}, \quad (8)$$

where the coefficients $\hat{a}_j$ are unknown, one can define $S_{N,N} \in \mathbb{R}^{N \times N}$, $\hat{a} \in \mathbb{R}^N$ and $c \in \mathbb{R}^N$ by

$$S_{N,N} = \begin{bmatrix} 1 & \cdots & 1 \\ s_1 & \cdots & s_N \\ \vdots & \ddots & \vdots \\ s_1^{N-1} & \cdots & s_N^{N-1} \end{bmatrix}, \quad \hat{a} = \begin{bmatrix} \hat{a}_1 \\ \hat{a}_2 \\ \vdots \\ \hat{a}_N \end{bmatrix}, \quad c = \begin{bmatrix} 0 \\ \vdots \\ \frac{m!}{\Delta x^m} \\ \vdots \\ 0 \end{bmatrix},$$

where $c$ has its only nonzero entry $m!/\Delta x^m$ on the position $m+1$, and solve the linear system

$$\hat{a} = S_{N,N}^{-1} c. \quad (9)$$

The system (9) can be trivially obtained by writing Taylor series for all points $\rho_{i+s_1} \dots \rho_{i+s_N}$ and summing them in a linear combination as in (8). The obtained order of accuracy is at least $O(\Delta x^{N-m})$, and sometimes can be higher if some of the higher derivatives are also eliminated (as in the case of (7)).

C. Continuation
Essentially the same process can be applied to the equation (1) to get the PDE version. For every term in the summation in (1) we can write

$$\rho_{i+s_j} = \rho(x_i + s_j \Delta x) = \rho(x_i) + \frac{\partial \rho}{\partial x}\Delta x\, s_j + \frac{1}{2}\frac{\partial^2 \rho}{\partial x^2}\Delta x^2 s_j^2 + \dots \quad (10)$$

Thus, assume we state the problem of finding the PDE approximation of (1) in the form

$$\sum_{j=1}^{N} a_j \rho_{i+s_j} \approx \sum_{k=0}^{d} c_k \frac{\Delta x^k}{k!} \frac{\partial^k \rho}{\partial x^k}, \quad (11)$$

where $d$ is the highest order of derivative (order of continuation) we want to use. Note that zero is also included in the right summation, since the function itself can be used in the resulting PDE. Then, introducing $S_{d+1,N} \in \mathbb{R}^{(d+1) \times N}$, $a \in \mathbb{R}^N$ and $c \in \mathbb{R}^{d+1}$ by

$$S_{d+1,N} = \begin{bmatrix} 1 & \cdots & 1 \\ s_1 & \cdots & s_N \\ \vdots & \ddots & \vdots \\ s_1^{d} & \cdots & s_N^{d} \end{bmatrix}, \quad a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix}, \quad c = \begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_d \end{bmatrix}, \quad (12)$$

the vector of unknown coefficients $c$ can be found by direct multiplication, $c = S_{d+1,N}\, a$, or

$$c_k = \sum_{j=1}^{N} a_j s_j^k \quad \forall k \in \{0,..,d\}. \quad (13)$$

Once (13) is solved, we write the PDE approximation to (1):

$$\frac{\partial \rho}{\partial t} = \sum_{k=0}^{d} c_k \frac{\Delta x^k}{k!} \frac{\partial^k \rho}{\partial x^k}. \quad (14)$$

As an example, applying (13) to the Transport ODE (2) renders the coefficients $c_0 = 0$ and $c_k = 1/\Delta x$ for all $k > 0$, and choosing $d = 1$ recovers the Transport PDE (3).

D. Accuracy of continuation
Procedures (9) and (13) look very similar from the algebraic point of view; however, they are qualitatively different in the way the problem is formulated and in how we should interpret their results.

The discretization procedure tries to find the best approximation to a continuous and smooth function $\rho$ and its derivatives. What is most important, the discretization step $\Delta x$ is usually an adjustable parameter which can be set by a system engineer arbitrarily small to satisfy the desired performance. Thus the notion of accuracy of a discretization is used to describe how fast the solution of the discretized equation tends to the solution of the original equation when $\Delta x$ tends to zero. In some sense this measures the quality of the discretization, since a higher order of accuracy means that the engineer can take a larger $\Delta x$ to achieve the same error and thus use a smaller number of states in the discretized system.

Instead, when the original system is given by the ODE, the nodes have fixed locations, thus $\Delta x$ is a true constant representing properties of an underlying physical system and it cannot be changed by an engineer. In turn this means that the accuracy defined as a class $O(\Delta x^{N-m})$ cannot measure the quality of the approximation, as $\Delta x$ does not behave as an arbitrarily small value. Moreover, in the ODE case the system state $\rho_i$ is known only on a given set of points $i$, thus in general the approximation $\rho(x)$ can be non-smooth or discontinuous. Even if we assume smoothness at the initial moment of time, the dynamics can render its derivatives unbounded. As a result the series in (10) can be non-convergent.

We know however that the systems (1) and (14) are connected by the finite difference methods (9) and (13). We will use this fact as a definition of a PDE approximation to an ODE and of an accuracy of this approximation.

Definition 1. Discretization of PDE to ODE is called valid if it is performed according to the finite difference method (9).

Definition 2. Continuation of ODE to PDE is called valid if there exists a valid discretization of the obtained PDE to the original ODE.

Definitions 1 and 2 say that if we use continuation on some ODE and then perform discretization at the same stencil points, we should arrive at the same ODE. This procedure sets a constraint on the minimum order of the PDE:
Theorem 1. The PDE (14) that is obtained from the ODE (1) with $N$ stencil points is valid if and only if

$$d + 1 \geqslant N. \quad (15)$$

Proof of necessity: Indeed, assume $d + 1 < N$. This means that the PDE approximation (14) has the highest derivative at most of the order $d$. Thus we can augment the vector of coefficients $c$ by $N - d - 1$ zeros and the matrix $S_{d+1,N}$ by $N - d - 1$ zero rows, where $c$ was defined by (13). Now, applying the discretization process (9) to $c$ we should arrive at the same vector $a$ of the parameters of the ODE system. Since this should be true for any $a$, we substitute $S_{N,N}^{-1}$ and the augmented $S_{d+1,N}$ and obtain the condition

$$\begin{bmatrix} 1 & \cdots & 1 \\ s_1 & \cdots & s_N \\ \vdots & \ddots & \vdots \\ s_1^{N-1} & \cdots & s_N^{N-1} \end{bmatrix}^{-1} \begin{bmatrix} 1 & \cdots & 1 \\ s_1 & \cdots & s_N \\ \vdots & \ddots & \vdots \\ s_1^{d} & \cdots & s_N^{d} \\ 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{bmatrix} = I,$$

which is impossible to satisfy since the second matrix is singular. Therefore there is no valid discretization process for the PDE obtained by continuation with order $d$ such that $d + 1 < N$, which by definition means that such continuation is not valid.

Proof of sufficiency: The case $d + 1 = N$ is trivial, since the equations (9) and (13) are equivalent in this situation. This obviously provides validity of the continuation procedure.

Now assume $d + 1 > N$. Then the obtained vector of coefficients $c$ is of higher dimension than $a$. The discretization (9) cannot be applied directly, since there are not enough stencil points to express the finite differences for the derivatives of order higher than $N - 1$. However, we can increase the set of stencil points. Let us choose additional $d + 1 - N$ stencil points $\bar{s}_{N+1}, \bar{s}_{N+2}, \dots, \bar{s}_{d+1}$. Applying continuation (13) to the original ODE (1) and then (9) to the obtained PDE using the augmented set of stencil points, we get new ODE gains $\bar{a}$ which are expressed as

$$\bar{a} = \begin{bmatrix} 1 & \cdots & 1 & 1 & \cdots & 1 \\ s_1 & \cdots & s_N & \bar{s}_{N+1} & \cdots & \bar{s}_{d+1} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ s_1^{d} & \cdots & s_N^{d} & \bar{s}_{N+1}^{d} & \cdots & \bar{s}_{d+1}^{d} \end{bmatrix}^{-1} \begin{bmatrix} 1 & \cdots & 1 \\ s_1 & \cdots & s_N \\ \vdots & \ddots & \vdots \\ s_1^{d} & \cdots & s_N^{d} \end{bmatrix} a.$$

It is now clear that the first $N$ elements of $\bar{a}$ are exactly $a$ and the rest are zero, irrespective of the chosen additional points $\bar{s}_j$. Thus the artificially introduced stencil points do not appear in the discretized PDE, rendering the same ODE as the original one.

The latter part of the proof of Theorem 1 means that the PDE obtained by the process (13) with $d + 1 > N$ has more information than one with $d + 1 = N$, since it provides exact Taylor approximations not only on the given set of points, but also at the additional $d + 1 - N$ points which can be chosen arbitrarily. This property can be used to define the order of accuracy as $d + 1 - N$.

Definition 3.
Order of accuracy of a valid continuation process of ODE to PDE is defined as the number of additional points in which the corresponding discretization process can be made exact simultaneously, i.e. $d + 1 - N$.

For example, a continuation

$$\rho_{i+1} - \rho_i \;\to\; \Delta x \frac{\partial \rho}{\partial x}$$

is of order of accuracy 0, since trying to discretize the PDE on any larger set of stencil points except $\{i, i+1\}$ will give a different ODE. At the same time a continuation

$$\rho_{i+1} - \rho_{i-1} \;\to\; 2\Delta x \frac{\partial \rho}{\partial x} + \Delta x^2 \left(0 \cdot \frac{\partial^2 \rho}{\partial x^2}\right)$$

has order of accuracy 1 because the second derivative vanishes (thus $d = 2$), and the corresponding discretization is exact on the set $\{i-1, i, i+1\}$. From now on we will consider only valid continuations, and most often in practice we will use continuations of order of accuracy 0.

E. Convergence of continuation
It is clear that the higher the order of continuation is taken, the better the original ODE operator (1) is approximated by the PDE (14). It is possible to study the convergence properties by shifting the problem to the frequency domain using the Fourier transform.
Let us define the function $a(x)$ as

$$a(x) = \sum_{j=1}^{N} a_j \delta(x + s_j \Delta x),$$

where $\delta(x)$ is the Dirac delta function. Therefore, for any integrable function $\rho(x)$ the equation (1) is equivalent to the following system with convolution:

$$\dot{\rho}(x) = (a \star \rho)(x). \quad (16)$$

Use now the Fourier transform, defined as

$$\hat{f}(\omega) = \int_{-\infty}^{\infty} f(x) e^{-ix\omega}\, dx \quad (17)$$

for any integrable function $f(x)$ and for any frequency $\omega \in \mathbb{R}$. It is known that the Fourier image of a convolution is a multiplication. Therefore the system (16) is just

$$\dot{\hat{\rho}}(\omega) = \hat{a}(\omega)\hat{\rho}(\omega), \quad \hat{a}(\omega) = \sum_{j=1}^{N} a_j e^{i s_j \Delta x \omega}, \quad (18)$$

where $\hat{a}(\omega)$ was found by direct calculation of the Fourier transform. (18) immediately shows that the spectrum of the original ODE system is parametrized by $\hat{a}(\omega)$. In fact, this result is well-known, since the system (1) on an infinite line belongs to the class of Laurent systems, whose spectrum is known to be (18), see [22].

Now let us calculate the spectrum of the continualized system (14). By another property of the Fourier transform, if the function $\rho(x)$ is sufficiently smooth and its derivatives are integrable, we can recover their Fourier images by

$$\widehat{\left(\frac{\partial^k \rho}{\partial x^k}\right)}(\omega) = (i\omega)^k \hat{\rho}(\omega).$$

Therefore (14) reads in the frequency domain as

$$\dot{\hat{\rho}}(\omega) = \hat{c}(\omega)\hat{\rho}(\omega), \quad \hat{c}(\omega) = \sum_{k=0}^{d} c_k \frac{\Delta x^k}{k!} (i\omega)^k. \quad (19)$$

Substituting (13), we can rewrite (19) as

$$\hat{c}(\omega) = \sum_{j=1}^{N} a_j \sum_{k=0}^{d} \frac{(i s_j \Delta x \omega)^k}{k!}. \quad (20)$$

Now, comparing (20) with (18), it is clear that (20) uses the first $d+1$ terms of the Taylor expansion of the exponentials in (18). This proves the following theorem:

Theorem 2.
The spectrum of the PDE (14) converges to the spectrum of the original ODE (1) pointwise as $d \to \infty$.

However, the convergence of spectrums is not uniform. Moreover, the spectrum (18) is an image of the unit circle and thus is a compact set, while the spectrum (20) for any $d$ is a polynomial and thus unbounded. It is still possible to state several corollaries of Theorem 2:

Corollary 1. For any bounded subset of frequencies the spectrum of the PDE (20) converges to the spectrum of the ODE (18) uniformly.
Figure 3. Spectrum of the Transport ODE (21), given by $e^{i\omega} - 1$, together with the spectra of its continuations. As the order $d$ increases, the spectra converge to the blue circle; however, for some orders (such as 4 or 5) they can become unstable.

Proof.
The result follows directly from Theorem 2 and the fact that the Taylor expansion of any analytic function converges to the function itself uniformly on any bounded domain.
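The convergence described by Theorem 2 and Corollary 1 is easy to check numerically. The sketch below (the helper names are ours, not the paper's) computes the continuation coefficients (13) for the Transport ODE (2) and compares the PDE spectrum (20) with the exact ODE spectrum (18) at a fixed frequency:

```python
import numpy as np
from math import factorial

# Continuation coefficients c_k = sum_j a_j s_j^k, k = 0..d, as in (13).
def continuation_coeffs(a, s, d):
    return np.array([sum(aj * sj**k for aj, sj in zip(a, s)) for k in range(d + 1)])

# Exact ODE spectrum (18): sum_j a_j exp(i s_j dx w).
def ode_spectrum(a, s, dx, w):
    return sum(aj * np.exp(1j * sj * dx * w) for aj, sj in zip(a, s))

# Polynomial PDE spectrum (20): sum_k c_k (i dx w)^k / k!.
def pde_spectrum(c, dx, w):
    return sum(ck * (1j * dx * w)**k / factorial(k) for k, ck in enumerate(c))

# Transport ODE (2): gains a = (1/dx, -1/dx) at shifts s = (1, 0)
dx = 1.0
a, s = [1 / dx, -1 / dx], [1, 0]
w = 2.0  # a fixed frequency
exact = ode_spectrum(a, s, dx, w)  # equals e^{iw} - 1
for d in (2, 6, 10):
    approx = pde_spectrum(continuation_coeffs(a, s, d), dx, w)
    print(d, abs(approx - exact))  # error shrinks as d grows
```

At any fixed $\omega$ the printed error decreases with $d$, in line with Theorem 2, while on an unbounded frequency range the polynomial $\hat{c}(\omega)$ still diverges from the bounded curve $\hat{a}(\omega)$, as noted above.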
Corollary 2. If the original ODE (1) is unstable, there exists $D \geqslant 1$ such that for all $d \geqslant D$ the continualized system (14) will also be unstable.

Proof. Since the original system is unstable, there exists $\omega_0$ such that $\operatorname{Re}\hat{a}(\omega_0) > 0$. Now, by the definition of a limit there exists $D \geqslant 1$ such that for all $d \geqslant D$ the spectrum of the continuation of order $d$ satisfies $\operatorname{Re}\hat{c}(\omega_0) > 0$.

The converse implication does not hold: stability of the original ODE does not guarantee stability of the continuation. The reason for this is in the unboundedness of the polynomial spectrum (20). Even if the original system is stable, for some choices of $d$ it is possible to introduce artificial instability on high frequencies. There are however some guidelines which can be used to properly choose the order of continuation $d$:

Corollary 3.
PDE (14) with an odd order of continuation $d$ has the same stability properties as a PDE with the order of continuation $d - 1$.

Proof. All odd terms in the spectrum (20) are purely imaginary and thus have no impact on the stability.
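Corollary 3 can be verified directly in a few lines: for the Transport ODE coefficients, the real part of the spectrum (20) is identical for orders $d$ and $d-1$ whenever $d$ is odd, because the order-$d$ term contributes only an imaginary part. A small sketch (helper names are ours):

```python
from math import factorial

# Polynomial PDE spectrum (20) for given coefficients c_0..c_d.
def pde_spectrum(c, dx, w):
    return sum(ck * (1j * dx * w)**k / factorial(k) for k, ck in enumerate(c))

# Transport ODE continuation coefficients with dx = 1: c_0 = 0, c_k = 1 for k >= 1.
def coeffs(d):
    return [0.0] + [1.0] * d

# For odd d, Re c_hat coincides with the order d-1 spectrum at every frequency.
for d in (3, 5, 7):
    for w in (0.5, 1.3, 2.7):
        re_odd = pde_spectrum(coeffs(d), 1.0, w).real
        re_even = pde_spectrum(coeffs(d - 1), 1.0, w).real
        assert abs(re_odd - re_even) < 1e-9  # same stability-relevant real part
```

The assertion never fires: the term $c_d (i\omega)^d / d!$ with odd $d$ is purely imaginary, so the real part, which decides stability, is unchanged.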
Corollary 4. Artificial instability is introduced when the last even term in the PDE (14) has $c_k > 0$ if $k = 4m$ or $c_k < 0$ if $k = 4m + 2$ for some $m \in \mathbb{Z}_+$.

Proof. Artificial instability comes if the term of the polynomial (20) with the highest even power is positive, which leads to a positive real part of the spectrum on high frequencies. Positivity of the highest even term is exactly equivalent to the statement of the corollary, since $i^{4m} = 1$ and $i^{4m+2} = -1$ for $m \in \mathbb{Z}_+$.

We will demonstrate the convergence of spectrums on the Transport ODE

$$\dot{\rho}_i = \rho_{i+1} - \rho_i. \quad (21)$$

Assuming $\Delta x = 1$, the continuation of (21) is:

$$\frac{\partial \rho}{\partial t} = \sum_{k=1}^{d} \frac{1}{k!} \frac{\partial^k \rho}{\partial x^k}. \quad (22)$$

The spectrum of (21) by formula (18) is given by $e^{i\omega} - 1$, which is depicted as a blue circle in Fig. 3 together with the spectra of the continuations up to the order $d = 6$. It is clear that as the order increases, the approximations become better.

The original Transport ODE is stable. Moreover, it has an intrinsic diffusion in it, which can be captured by the continuation of the second order. However, the continuation of order 4 is unstable. This happens because of an artificial instability as described in Corollary 4, since $c_4 = 1 > 0$. In general all stable continuations of the Transport ODE are given by the orders $\{1, 2, 3, \dots, 4m+2, 4m+3, \dots\}$ for all $m \in \mathbb{Z}_+$.

III. METHOD FOR NONLINEAR SYSTEMS
Finite differences give us a complete tool for linear systems, but for nonlinear systems they should be applied in composition with nonlinearities. Using the additional concept of a computational graph it is possible to elaborate the case of general nonlinear ODE systems.

As in the previous case we assume without loss of generality that the nodes are equally spaced along the 1D line, a node $i$ having a state $\rho_i$ and a position $x_i$. Then the general nonlinear ODE with space dependence takes the form

$$\dot{\rho}_i = F\left(\rho_{i+s_1}, \rho_{i+s_2}, \dots, \rho_{i+s_N}\right). \quad (23)$$

We further assume that the function $F$ is continuous.

A. Computational graph
In 1957 Kolmogorov [23] showed that every multidimensional continuous function can be written as a composition of functions of one variable and additions. This work laid the basis for neural network function approximation, which is now a major branch of modern machine learning.

Here we will use this idea and assume that the function $F$ is given in the form of a computational graph (see [20] for example). This is a directed acyclic graph, every node of which represents a one-dimensional function applied to a weighted sum of the inputs coming to this node. We assume that the leaves of this graph are the states of the system $\rho_{i+s_j}$ and the root node computes the resulting value of $F$.

As an example of a computational graph we will consider the system

$$\dot{\rho}_i = \sin\left(\rho_{i+1} - \rho_i\right) - \sin\left(\rho_i - \rho_{i-1}\right), \quad (24)$$

which is a system of Kuramoto oscillators coupled on a ring. The computational graph for (24) is presented in Fig. 4.

Figure 4. Computational graph for the system (24). Similar subgraphs are outlined by dashed rectangles of the same color. Possible choices of the sinus subgraphs' positions are written in the corners of the blue rectangles.
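The right-hand side of (24) is a difference of two flux terms, so on a ring the total state $\sum_i \rho_i$ is exactly conserved (the sine terms telescope). A short simulation sketch illustrates this; the forward-Euler integration and variable names are our own, not from the paper:

```python
import numpy as np

# Right-hand side of the Kuramoto ring (24), vectorized over all nodes.
def kuramoto_rhs(rho):
    flux = np.sin(rho - np.roll(rho, 1))   # flux[i] = sin(rho_i - rho_{i-1})
    return np.roll(flux, -1) - flux        # sin(rho_{i+1}-rho_i) - sin(rho_i-rho_{i-1})

rng = np.random.default_rng(0)
rho = rng.uniform(-1.0, 1.0, 50)           # random initial phases on a ring of 50 nodes
total0 = rho.sum()

dt = 0.01
for _ in range(1000):                      # simple forward-Euler integration
    rho = rho + dt * kuramoto_rhs(rho)

print(abs(rho.sum() - total0))             # conserved up to round-off
```

The same conservation property survives the continuation: a PDE in divergence form inherits $\frac{d}{dt}\int \rho\, dx = 0$ under periodic boundary conditions.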
B. Similar subgraphs and their positions

Now let us introduce an original notion of similar subgraphs. A subgraph is a computational graph which computes a subexpression of the original computational graph. Every node in a computational graph serves as the root of a subgraph computing the expression defined in this node. The leaf nodes are also subgraphs, "computing" themselves.

Definition 4. We call two subgraphs similar if
1) they serve as an input to the same node,
2) they differ only in the positions of the leaf nodes, and this difference can be represented by a single shift.

This is an equivalence relation, therefore we can speak about equivalence classes, which we call sets of similar subgraphs.

For example, in Fig. 4 there are three sets of similar subgraphs:
1) $\rho_{i-1}$ and $\rho_i$ for the left sinus node,
2) $\rho_i$ and $\rho_{i+1}$ for the right sinus node,
3) $\sin(\rho_i - \rho_{i-1})$ and $\sin(\rho_{i+1} - \rho_i)$ for the root node,
because they differ by a single shift which equals 1.

The last thing which should be defined is the position of a subgraph:

Definition 5. The position of a subgraph is defined as a coordinate in space where the expression of this subgraph is calculated.

The leaf nodes by definition are the states of the system, thus they are calculated at some positions on the line. For example, the leaf node $\rho_{i+1}$ in Fig. 4 is defined in the point $x_{i+1}$, thus we will say that its position is $i+1$. The position of the root node is $i$, since it is exactly the position of the left-hand side term in (23). We will define the positions of other subgraphs as the average of their leaves' positions. Note that in general there is some freedom in the definition of the subgraphs' positions, with the only constraint that similar subgraphs should differ by a single shift, but we will omit this for simplicity.

Since the position of a subgraph represents a position on the line, it is natural to have non-integer position values, although the leaf nodes and the root have only integer positions. As an example, defining the position as an average, in Fig. 4 the node $\sin(\rho_{i+1} - \rho_i)$ has its position $i + 1/2$.

C. Continuation to a nonlinear PDE
When the system (23) is expressed in the form of a computational graph, with similar subgraphs found and their positions defined, one can perform the continuation procedure described in Section II-C to obtain a PDE.

Continuation should be performed recursively, starting from the leaves. Each set of similar subgraphs by definition is used in their common ancestor node as a linear combination of equivalent elements shifted by some distance. Continuation of this linear combination by (11) replaces the set of similar subgraphs by a weighted sum of partial derivatives of subexpressions, calculated at the position of the ancestor node.

Let $\Delta x = x_{i+1} - x_i$ be the distance between any two neighbouring nodes. Elaborating example (24) and using order of accuracy 0, we perform the continuation in three steps:
1) $\sin(\rho_{i+1} - \rho_i) \to \sin\left(\Delta x \frac{\partial \rho}{\partial x}(x_{i+1/2})\right)$,
2) $\sin(\rho_i - \rho_{i-1}) \to \sin\left(\Delta x \frac{\partial \rho}{\partial x}(x_{i-1/2})\right)$,
3) $\sin_{i+1/2} - \sin_{i-1/2} \to \Delta x \frac{\partial}{\partial x}\sin$,
which finally gives a nonlinear PDE representation of (24):

$$\frac{\partial \rho}{\partial t} = \Delta x \frac{\partial}{\partial x} \sin\left(\Delta x \frac{\partial \rho}{\partial x}\right). \quad (25)$$

Speaking about accuracy, the simplest possible PDE can be obtained by choosing orders of accuracy equal to 0 for each set of similar subgraphs. For more accurate equations it makes sense to specify the desired order of the equation $d$ and then discard all the terms which consist of compositions of derivatives of combined order higher than $d$.

IV. EXTENSIONS
Until now we discussed systems with nodes which were uniformly placed on the infinite 1D line and which had common space-independent dynamics. The method can be extended to include more classes of systems.

Spatially invariant systems [21] such as periodic ones can be tackled by choosing different index spaces. In the periodic case we can assume that the positions $x \in S^1$ are placed on the unit circle and the indices $i \in \mathbb{Z}/n\mathbb{Z}$ form a ring of integers modulo $n$, where $n$ is the number of states of the original ODE. Since any function on $S^1$ can be mapped to a periodic function on $\mathbb{R}$, the analysis in Sections II and III remains the same.

Further, time dependence can be introduced into the system gains both in the ODE and in the PDE, where the continuation is performed independently for every fixed $t$. This allows to use the method for time-varying systems and switching networks. Also systems whose state is vector-valued can be continualized using the same finite-differences-based scheme, thus in the following we will assume that the state of a system is scalar.

In the following subsections we will explore how the method can be extended to include systems with several spatial dimensions, systems with space dependence or nonuniform placing, and systems with boundaries. Finally, we introduce a concept of PDE with index derivatives, which can be applied to systems whose states coincide with the positions in space, for example particle systems.

A. Multidimensional systems
Let the position of a node $\rho_i$ be described by $x_i \in \mathbb{R}^n$. Moreover, a node $\rho_i$ is referenced by a multi-index $i = (i_1, \dots, i_n) \in \mathbb{Z}^n$. We assume that the position difference between two neighbour nodes $i = (i_1, \dots, i_k, \dots, i_n)$ and $i' = (i_1, \dots, i_k + 1, \dots, i_n)$ is

$$x_{i'} - x_i = (0, \dots, \Delta x_k, \dots, 0) \quad \forall k \in \{1..n\},$$

and that there exists a vector $\Delta x = (\Delta x_1, \dots, \Delta x_k, \dots, \Delta x_n)$.

We shall describe the extension for linear systems, as nonlinear systems can be dealt with in an analogous way by the same computational graph concept as in Section III. Thus we start from the linear system (1). For a nonnegative multi-index $h$ we define the absolute value $|h| = \sum_{k=1}^{n} h_k$. Further, we define the multi-index power $h$ of a vector $x$ as $x^h = \prod_{k=1}^{n} x_k^{h_k}$, with the assumption $0^0 = 1$. By Taylor series,

$$\rho_{i+s_j} = \rho_i + \sum_{|h|=1} s_j^h \Delta x^h \frac{\partial \rho_i}{\partial x^h} + \frac{1}{2!}\sum_{|h|=2} s_j^h \Delta x^h \frac{\partial^2 \rho_i}{\partial x^h} + \dots, \quad (26)$$

and the continuation up to the order $d$ thus is

$$\frac{\partial \rho}{\partial t} = \sum_{\substack{h \in \mathbb{Z}^n_+ \\ |h| \leqslant d}} c_h \frac{\Delta x^h}{|h|!} \frac{\partial^{|h|} \rho}{\partial x^h}, \quad c_h = \sum_{j=1}^{N} a_j s_j^h. \quad (27)$$

Using $S^1$ instead of $\mathbb{R}$, more complex multidimensional spaces can be covered, such as the torus or the cylinder.

B. Space-dependent systems
Let us now look at the linear system (1) with one important difference: the system gains $a_j$, the shifts $s_j$ and the number of neighbours $N$ become space-dependent:

$$\dot\rho_i = \sum_{j=1}^{N_i} a_j^i \rho_{i+s_j^i}. \quad (28)$$

Notice that equation (28) in fact describes any linear system. Now, choosing a single $d$ such that $d + 1 \geqslant N_i$ for all $i$, one can perform the continuation (13) at every point $x_i$ up to the order $d$ and obtain a PDE (14) with space-dependent gains $c_k^i$. This means that we know the gains $c_k^i$ at the points with coordinates $x_i$, which can be seen as a sampling of some function $c_k(x)$ at the points $x_i$.

We can now perform either an interpolation or an approximation based on this sampling. In the first case we seek $c_k(x)$ such that $c_k(x_i) = c_k^i$, while in the second case it is enough to satisfy this relation approximately, for example by determining $c_k(x)$ via least squares. In either case, the resulting continuation of (28) is given by

$$\frac{\partial \rho}{\partial t} = \sum_{k=0}^{d} c_k(x) \frac{\Delta x^k}{k!} \frac{\partial^k \rho}{\partial x^k}. \quad (29)$$

In the case of nonlinear systems the continuation can be performed if the computational graphs of all nodes compute the same dynamics. We formalize this by the following property:

Definition 6. We say that two computational graphs have the same structure if
1) their root nodes compute the same expression,
2) any child subgraph of the root node of the first graph has the same structure as some child subgraph of the root node of the second graph, and vice versa.

This definition, formulated through recursion, essentially means that the order of nonlinearities hidden in the two computational graphs should coincide, see Fig. 5.

Figure 5. Illustration of two computational graphs having the same structure.
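The recursive test of Definition 6 can be sketched in a few lines of code. This is our illustrative sketch, not from the paper: computational graphs are assumed to be represented as nested tuples `(op_name, child_1, ..., child_k)`, with leaves (state variables) given as plain strings, and the example graphs `gA`, `gB`, `gC` are hypothetical.

```python
def same_structure(g1, g2):
    """Recursively test whether two computational graphs have the
    same structure in the sense of Definition 6."""
    # Leaves (state variables) trivially have the same structure.
    if isinstance(g1, str) or isinstance(g2, str):
        return isinstance(g1, str) and isinstance(g2, str)
    # 1) the root nodes must compute the same expression,
    if g1[0] != g2[0]:
        return False
    # 2) every child subgraph of one root must have the same structure as
    #    some child subgraph of the other root, and vice versa.
    match_12 = all(any(same_structure(c1, c2) for c2 in g2[1:]) for c1 in g1[1:])
    match_21 = all(any(same_structure(c2, c1) for c1 in g1[1:]) for c2 in g2[1:])
    return match_12 and match_21

# Hypothetical graphs in the spirit of Fig. 5: the same nonlinearities
# F, G, H composed in a different order of children.
gA = ("F", ("G", "x1"), ("H", "x2"))
gB = ("F", ("H", "x3"), ("G", "x4"))
gC = ("F", ("G", "x1"), ("G", "x2"))

print(same_structure(gA, gB))  # True: same nonlinearity orders
print(same_structure(gA, gC))  # False: no H-subtree in the second graph
```

Note that the matching is order-insensitive, exactly as required by condition 2) of the definition: each child only needs to find *some* structurally equal counterpart on the other side.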
Finally, a continuation of a nonlinear ODE system can be performed if the computational graphs computing the dynamics of all states $\rho_i$ have the same structure. Indeed, in this case it is possible to perform a continuation for any set of similar subgraphs of each node, as in the linear case (28)-(29). Moreover, by Definition 6 these sets of similar subgraphs at different positions serve as inputs to the same nonlinearities, therefore a single PDE with space-dependent coefficients can be obtained.

Remark. In theory, it is possible to satisfy Definition 6 for any nonlinear system formulated through computational graphs. Indeed, assume two computational graphs have two different root node expressions, denoted $F(\cdot)$ and $G(\cdot)$ respectively. Then we can artificially create a new common root node which computes $1 \cdot F(\cdot) + 0 \cdot G(\cdot)$ for the first graph and $0 \cdot F(\cdot) + 1 \cdot G(\cdot)$ for the second. Thus we can satisfy the first condition of Definition 6, and by applying this idea recursively one can transform any pair of computational graphs into a pair with the same structure. However, if the computational graphs of the system are too different at different points, representing the system as a PDE may make no sense, since it means that the dynamics of different parts of the system have nothing in common.

C. Unequally-spaced systems
In the original derivation of (11) we assumed that the nodes are separated by a constant $\Delta x$ in space. This was done for simplicity; in general, systems with nonuniform spacing can be tackled in exactly the same way as the space-dependent systems described in the previous subsection. Indeed, assume the linear system is given by $\dot\rho_i = \sum_{j=1}^{N_i} a_j^i \rho_{i+s_j^i}$, where in general $x_{i+1} - x_i \neq x_{j+1} - x_j$ for $i \neq j$. Then for every point the continuation can be performed by defining $c_k^i = \sum_{j=1}^{N_i} a_j^i (x_{i+s_j^i} - x_i)^k$. Representing the obtained gains by functions $c_k(x)$ as described in the previous subsection, one finishes with a PDE

$$\frac{\partial \rho}{\partial t} = \sum_{k=0}^{d} \frac{c_k(x)}{k!} \frac{\partial^k \rho}{\partial x^k}. \quad (30)$$

D. Boundary conditions
Now let us look at the Heat PDE:

$$\frac{\partial \rho}{\partial t} = \frac{\partial^2 \rho}{\partial x^2}. \quad (31)$$

Imagine that this equation is defined on the interval $x \in [0, +\infty)$, that is, there is a boundary at the point $x = 0$. The two classical boundary conditions, with $a \in \mathbb{R}$, are

1) Dirichlet BC: $\rho(0) = a$,
2) Neumann BC: $\partial\rho/\partial x\,(0) = a$. (32)

There can also exist a linear combination of these boundary conditions, called the Robin BC.

If the Heat Equation (31) is discretized with the stencil points $\{i-1, i, i+1\}$, the result is

$$\dot\rho_i = \frac{1}{\Delta x^2}\big(\rho_{i-1} - 2\rho_i + \rho_{i+1}\big). \quad (33)$$

Assume now that there exists a leftmost state $\rho_0$. Its dynamics can be obtained by the discretization of the boundary value problem (31)-(32) in two ways:

1) Dirichlet BC: $\dot\rho_0 = (a - 2\rho_0 + \rho_1)/\Delta x^2$,
2) Neumann BC: $\dot\rho_0 = (\rho_1 - \rho_0)/\Delta x^2 - a/\Delta x$. (34)

Now imagine the system (31) is obtained by the continuation process from the system (33). We can notice that the states of (33) are governed by the same dynamics except for the boundary state $\rho_0$. The question is how to recover the boundary conditions (32) for the PDE from the dynamics of $\rho_0$ in (34). This indeed can be done if one assumes that there exists a "ghost cell" $\rho_{-1}$ which has no dynamics of its own but is algebraically connected with the adjacent states. With a proper definition of $\rho_{-1}$ the equation for $\dot\rho_0$ can be represented in the same way as for the other states (33) and thus has the same continuation (31). For example, the algebraic equations for $\rho_{-1}$ representing (33)-(34) are

1) Dirichlet BC: $\rho_{-1} = a$,
2) Neumann BC: $\rho_{-1} = \rho_0 - a\,\Delta x$. (35)

The ghost cell $\rho_{-1} = a$ for the case of the Dirichlet BC is depicted in Fig. 6. Notice that the equations (35) can be directly continualized, obtaining (32).

Figure 6. Boundary of the system (33) with Dirichlet boundary condition (34), represented by a ghost cell $\rho_{-1} = a$.

This procedure can be generalized to any ODE system: once the states near the boundaries change their dynamics with respect to the general governing equation, this change can be represented by "ghost cells" with algebraic dependences on the "real" states. Continualizing these algebraic equations leads to the boundary conditions for the obtained PDE.
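The ghost-cell construction above can be illustrated with a small numerical sketch (ours, not from the paper). Every state, including the boundary one, keeps the interior stencil (33); the missing neighbours are supplied algebraically by ghost cells. The grid size, the Dirichlet value $a$, and the artificial right boundary that closes the test problem are arbitrary choices for this sketch.

```python
import numpy as np

n, a = 21, 1.0
dx = 0.05
dt = 0.4 * dx**2                 # explicit Euler, stable for dt <= dx^2/2
rho = np.zeros(n)

for _ in range(40000):
    left_ghost = a               # Dirichlet ghost cell, as in (35)
    right_ghost = 0.0            # close the test problem with rho = 0
    padded = np.concatenate(([left_ghost], rho, [right_ghost]))
    # interior stencil (33) applied uniformly to all states
    rho = rho + dt * (padded[:-2] - 2 * padded[1:-1] + padded[2:]) / dx**2

# The steady state is linear between the two ghost values, so the second
# differences vanish and the boundary state sits closest to a.
print(round(rho[0], 3))          # a * n / (n + 1), here 0.955
```

Continualizing the ghost relation $\rho_{-1} = a$ as $\Delta x \to 0$ recovers exactly the Dirichlet condition $\rho(0) = a$ of (32), as stated in the text.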
E. PDE with index derivatives
Usually PDEs have derivatives written with respect to the time and space variables, so their physical meaning is that of a function varying continuously in time and space. However, in general nothing prevents us from writing a PDE with respect to some other variables.

Assume a physical system is given by a set of interacting agents, with agents indexed by an integer index $i \in \mathbb{Z}$ (a general multi-index space $\mathbb{Z}^n$ can also be used). Let agent $i$ have a state $\rho_i$. The index variable $i$ is by definition discrete. However, we can make the assumption that in between two agents with consecutive indices $i$ and $i+1$ the state varies continuously and smoothly from $\rho_i$ to $\rho_{i+1}$. Denoting this continuously varying index by $M \in \mathbb{R}$, we can say that the state of the system is a continuous and smooth function $\rho(M)$ with the property $\rho(i) = \rho_i$. This definition of $M$ coincides with the definition of the Moskowitz function used in traffic modeling to describe the number of vehicles that have passed through a fixed point [24].

Once the index variable is continuous, we can think of it as a new space variable. Thus it is possible to use the continuation described in the previous sections, where the distance between two consecutive agents is obviously $\Delta M = 1$. The derivatives of the state with respect to the index can be obtained by continuation, for example $\rho_{i+1} - \rho_i \to \partial\rho/\partial M$.

V. DERIVATION OF THE EULER EQUATIONS
At the beginning of the 20th century Hilbert posed his 6th problem, in which he suggested developing a rigorous way leading from the atomistic view to the laws of motion of continua. In particular, the problem can be formulated as a derivation of the Euler equations for compressible fluids from the Newtonian dynamics of individual particles.

For the most famous case of particles interacting through collisions, the Boltzmann equation was developed, describing the evolution of the joint position-velocity probability distribution of the particles. The transformation from the individual dynamics to the Boltzmann equation is based on the Boltzmann-Grad limit [25], which assumes that the velocities of colliding particles are independent. The subsequent transformation from the Boltzmann equation to the Euler equations uses either Hilbert or Chapman-Enskog expansions with space-contraction limits [26], [27], Grad moments [28] or the method of invariant manifolds [29].

Another situation arises when the particles interact through long-range forces. In this case the Vlasov equation can be used instead of the Boltzmann equation to describe the joint position-velocity probability distribution. The derivation of the Euler equations from the Vlasov equation was performed in [30] using a space-contracting limit. In particular, it was shown that the resulting system has zero temperature, i.e. the velocities of individual particles coincide with the velocity field. However, due to the space contraction the particular form of the potential function was lost and the obtained pressure was just a square of the density.

Here we present a derivation of the Euler equations directly from the individual dynamics using the continuation method described in the previous sections. Contrary to other works, we do not use any kind of limits, and we rely on only one assumption, the isotropy of the space. The assumption requires that for any particle its nearest neighbours are distributed uniformly in every direction, which can be seen as a counterpart to the molecular chaos hypothesis in the standard derivation of the Boltzmann equation.
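The isotropy assumption can be probed numerically. The following sketch (ours, with arbitrary lattice size, jitter amplitude and random seed) places particles on a jittered square lattice and checks that the direction from a particle to its nearest neighbour is statistically uniform over the four axis-aligned sectors.

```python
import numpy as np

rng = np.random.default_rng(0)
side = 40
gx, gy = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
pts = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
pts += rng.uniform(-0.3, 0.3, size=pts.shape)      # small positional noise

# Pairwise distances (brute force); mask out self-distances.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
nearest = d.argmin(axis=1)

# Keep interior lattice sites only, to avoid boundary bias.
interior = ((gx.ravel() > 0) & (gx.ravel() < side - 1)
            & (gy.ravel() > 0) & (gy.ravel() < side - 1))
vec = pts[nearest[interior]] - pts[interior]

# Classify each nearest-neighbour direction into one of four sectors
# (right, left, up, down) and count the occurrences.
sector = np.where(np.abs(vec[:, 0]) > np.abs(vec[:, 1]),
                  np.where(vec[:, 0] > 0, 0, 1),
                  np.where(vec[:, 1] > 0, 2, 3))
counts = np.bincount(sector, minlength=4)
print(counts)     # four roughly equal sector counts
```

By the symmetry of the lattice each sector receives about a quarter of the interior points, which is the discrete analogue of the "uniform in every direction" requirement.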
A. System of particles
It is assumed that the fluid consists of small particles interacting with each other, every particle following simple Newtonian laws. We will study the system in $n$ space dimensions, and the particles are assumed to have unit mass. We further assume that there is an interaction between each pair of particles given by a force

$$F(x_i - x_j) = \frac{x_i - x_j}{\|x_i - x_j\|}\, f(\|x_i - x_j\|) = (x_i - x_j)\, \varphi(\|x_i - x_j\|), \quad (36)$$

thus the force acts along the line connecting the two particles, with the magnitude $f$ depending only on the distance between the particles. For simplicity we also define the function $\varphi(s) = f(s)/s$ representing the scaled magnitude. Since we assume an infinite number of particles and an infinitely large space, the magnitude of the force should satisfy

$$\int_{\varepsilon}^{+\infty} s^{n-1} f(s)\, ds < \infty \quad \forall \varepsilon > 0. \quad (37)$$

The particles are enumerated by multi-indices $i \in \mathbb{Z}^n$ as written in Section IV-A. Now let us write the dynamics of the particle with multi-index $i$ using the second Newton's Law:

$$\dot x_i = v_i, \qquad \dot v_i = \sum_{q \neq 0} F(x_i - x_{i+q}), \quad (38)$$

where the summation is performed over all multi-indices $q$ in $\mathbb{Z}^n \setminus \{0\}$, since all the particles interact with each other. Both the position $x_i$ and the velocity $v_i$ are vectors in $\mathbb{R}^n$.

B. Derivation in the Euclidean space
Treating the coordinate $x_i$ as a state and using the idea of Section IV-E, we define a multi-index function $M(x,t)$ which is the inverse function of the coordinate: $M(x_i, t) := i$. Likewise, $x(i,t) = x_i(t)$ and thus $x(M(x,t),t) \equiv x$ for all $x \in \mathbb{R}^n$.

Now let us write the inverse-function property of the multi-index as $M(x(M,t),t) \equiv M$ for all $M \in \mathbb{R}^n$, where the space of multi-indices is continuous by the assumption of Section IV-E. Taking the time and the index derivatives, we obtain the following very useful relations on Jacobians:

$$\frac{\partial M}{\partial t} + \frac{\partial M}{\partial x}\frac{\partial x}{\partial t} = 0, \quad (39)$$

$$\frac{\partial M}{\partial x}\frac{\partial x}{\partial M} = I. \quad (40)$$

Equation (39) can be seen as a PDE where the function $M$ depends both on $x$ and $t$. Recalling that the multi-index is assumed to be continuous, we can further utilize the first equation of (38) written in the form $\dot x(M,t) = v(M,t)$, substitute it into (39) and obtain the following equation for the multi-index evolution:

$$\frac{\partial M}{\partial t} = -\frac{\partial M}{\partial x}\, v(M(x,t),t) = -\frac{\partial M}{\partial x}\, u(x,t), \quad (41)$$

where the velocity function $u(x,t) = v(M(x,t),t)$ is defined as the velocity of the particle at a given point in space. Finally, taking the derivative with respect to space, we obtain

$$\frac{\partial}{\partial t}\left(\frac{\partial M}{\partial x}\right) = -\frac{\partial}{\partial x}\left(\frac{\partial M}{\partial x}\, u\right). \quad (42)$$

The Jacobian matrix $\partial M/\partial x\,(x,t)$ represents a compression tensor, which measures how close neighbouring particles are with respect to different directions in the Euclidean space. The evolution of this Jacobian in the Euclidean space is described by the matrix PDE (42), which is essentially a transport equation with flow velocity given by $u(x,t)$.

Now we approach the second equation in (38). It would be desirable to transform it in such a way that we obtain an evolution equation for the flow velocity $u(x,t)$.
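Equation (41) says that the continuous index is transported by the flow, so $M(x_i(t), t)$ stays constant along every particle trajectory. The following sketch (ours, not from the paper) checks this in 1D for the hypothetical velocity field $u(x) = \alpha x$, for which the method of characteristics gives the closed form $M(x,t) = M_0(x\, e^{-\alpha t})$.

```python
import math

alpha = 0.3
M0 = lambda x: 2.0 * x - 1.0           # illustrative initial index profile

def M(x, t):
    # Closed-form solution of dM/dt = -(dM/dx) u for u(x) = alpha * x.
    return M0(x * math.exp(-alpha * t))

# Integrate one particle of the flow dx/dt = u(x) with small Euler steps.
x, t, dt = 1.7, 0.0, 1e-4
label0 = M(x, t)
for _ in range(20000):                 # integrate up to t = 2
    x += dt * alpha * x
    t += dt

print(abs(M(x, t) - label0) < 1e-3)    # True: the particle keeps its index
```

This conservation of the label $M$ along trajectories is exactly what makes $\partial M/\partial x$ a meaningful measure of local particle compression.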
First of all, let us rewrite the second equation of (38) in a way more suitable for continuation, namely

$$\dot v_i = -\sum_{q > 0} \big(F(x_{i+q} - x_i) - F(x_i - x_{i-q})\big), \quad (43)$$

where the summation is performed over all multi-indices which are greater than zero in the lexicographical order, i.e. the first nonzero element of $q$ should be positive.

Now we use the continuation of order of accuracy 0 on a multidimensional system as described in Section IV-A, such that

$$x_{i+q} - x_i \to \frac{\partial x}{\partial M}\big(x_{i+q/2}\big)\, q, \qquad x_i - x_{i-q} \to \frac{\partial x}{\partial M}\big(x_{i-q/2}\big)\, q,$$

which means that (43) becomes

$$\dot v_i = -\sum_{q > 0} \left( F\!\left(\frac{\partial x}{\partial M}\, q\right)\Big|_{i+q/2} - F\!\left(\frac{\partial x}{\partial M}\, q\right)\Big|_{i-q/2} \right).$$

Applying the continuation further to the forces, we obtain $F_{i+q/2} - F_{i-q/2} \to \frac{\partial F}{\partial M}(x_i)\, q$. Thus (43) transforms into

$$\frac{\partial v}{\partial t} = -\sum_{q > 0} \frac{\partial}{\partial M}\left( \left[\frac{\partial x}{\partial M}\, q\right] \varphi\!\left( \left\| \frac{\partial x}{\partial M}\, q \right\| \right) \right) q, \quad (44)$$

where we used the definition of the force (36). Now, we state the following result:

Proposition 1. For any $q \in \mathbb{Z}^n$ and for any smooth scalar field $\varphi(x)$ the following identity holds:

$$\left[ \frac{\partial}{\partial M}\left( \varphi \frac{\partial x}{\partial M}\, q \right) q \right]^T = \nabla \cdot \left( \varphi \frac{\partial x}{\partial M}\, q q^T \frac{\partial x}{\partial M}^T \right) - \varphi\, \nabla \cdot \left( \frac{\partial x}{\partial M} \right) q q^T \frac{\partial x}{\partial M}^T, \quad (45)$$

where $\nabla$ denotes a row vector of derivatives with respect to $x$.

Proof. First, for convenience denote the left-hand side as a vector $Q$:

$$Q := \frac{\partial}{\partial M}\left( \varphi \frac{\partial x}{\partial M}\, q \right) q = \frac{\partial}{\partial x}\left( \varphi \frac{\partial x}{\partial M}\, q \right) \frac{\partial x}{\partial M}\, q. \quad (46)$$

Also define $h = (\partial x/\partial M)\, q$. Expanding $\partial(\varphi h)/\partial x$, we get

$$Q = h \frac{\partial \varphi}{\partial x} h + \frac{\partial h}{\partial x} h\, \varphi = h h^T \frac{\partial \varphi}{\partial x}^T + \frac{\partial h}{\partial x} h\, \varphi. \quad (47)$$

Now, for any $h \in \mathbb{R}^n$,

$$\nabla\cdot(h h^T) = \Big( \textstyle\sum_i h_1 \frac{\partial h_i}{\partial x_i} + \sum_i h_i \frac{\partial h_1}{\partial x_i} \;\;\cdots\;\; \sum_i h_n \frac{\partial h_i}{\partial x_i} + \sum_i h_i \frac{\partial h_n}{\partial x_i} \Big),$$

which means that

$$\big(\nabla\cdot(h h^T)\big)^T = \frac{\partial h}{\partial x} h + (\nabla\cdot h)\, h. \quad (48)$$

Therefore the transpose of (47) is

$$Q^T = \frac{\partial \varphi}{\partial x} h h^T + \varphi\, \nabla\cdot(h h^T) - \varphi\,(\nabla\cdot h)\, h^T. \quad (49)$$

Since for any matrix $J$ and any scalar field $\alpha$

$$\nabla\cdot(\alpha J) = \frac{\partial \alpha}{\partial x} J + \alpha\, \nabla\cdot J, \quad (50)$$

we can simplify (49) as

$$Q^T = \nabla\cdot(\varphi h h^T) - \varphi\,(\nabla\cdot h)\, h^T. \quad (51)$$

The result of the proposition follows by substituting $h$ and noticing that $\nabla\cdot((\partial x/\partial M)\, q) = (\nabla\cdot(\partial x/\partial M))\, q$.

Proposition 1 allows us to rewrite (44) as depending only on Euclidean-space divergences and the inverse of the compression tensor $\partial M/\partial x$. To finalize the derivation of a complete set of equations, recall the definition of the velocity field $u(x,t) = v(M(x,t),t)$. Taking the time derivative,

$$\frac{\partial u}{\partial t} = \frac{\partial v}{\partial t} + \frac{\partial v}{\partial M}\frac{\partial M}{\partial t},$$

which by (41) is

$$\frac{\partial u}{\partial t} = -\frac{\partial v}{\partial M}\frac{\partial M}{\partial x}\, u + \frac{\partial v}{\partial t}.$$

This equation can be simplified using $\partial u/\partial x = (\partial v/\partial M)(\partial M/\partial x)$. Finally, substituting (44) and (45) and combining the result with (42), we obtain the system

$$\frac{\partial}{\partial t}\left(\frac{\partial M}{\partial x}\right) = -\frac{\partial}{\partial x}\left(\frac{\partial M}{\partial x}\, u\right),$$
$$\frac{\partial u}{\partial t} = -\frac{\partial u}{\partial x}\, u - \sum_{q>0} \left[ \nabla\cdot\left( \varphi \frac{\partial x}{\partial M}\, q q^T \frac{\partial x}{\partial M}^T \right) - \varphi\, \nabla\cdot\left( \frac{\partial x}{\partial M} \right) q q^T \frac{\partial x}{\partial M}^T \right]^T, \quad (52)$$

where $\varphi = \varphi(\|(\partial x/\partial M)\, q\|)$.

The system (52) has 12 states in 3-dimensional space: 9 for $\partial M/\partial x\,(x,t)$ and 3 for $u(x,t)$. It resembles the famous Grad 13-moment system [28], which extends the Euler equations by considering a direction-dependent pressure tensor. The last state of the Grad 13-moment system is the inner energy, which does not appear in (52). The reason is that we derive the continuous interaction term explicitly from the interaction forces, which is possible only if the forces are defined by long-range potentials. As was shown in [30], expressing a system with long-range potentials by the Euler equations leads to a solution with zero temperature, therefore the inner energy becomes functionally dependent on the velocity field and its evolution equation can be omitted.

C. Dimensionality reduction
It appears that in some special cases it is possible to reduce the system (52) by considering only one scalar characteristic of the compression at each point of space instead of the whole compression tensor.

Indeed, we define the density as the determinant of the compression tensor, $\rho(x,t) := \det(\partial M/\partial x)(x,t)$. Not only the compression tensor itself, but also its determinant satisfies (42). This nontrivial fact holds because the compression tensor is a Jacobian; the proof is given in Lemma 1 in the Appendix. Therefore from (42)

$$\frac{\partial \rho}{\partial t} = -\nabla\cdot(\rho u). \quad (53)$$

This equation is the first of the complete set of Euler equations. Unfortunately, the second equation of (52) depends on the whole compression tensor and thus cannot be described by means of the density alone. This is reasonable, since in general the system can have different pressures in different directions in response to different compressions. Therefore, in order to simplify the system, we need to assume that the compression can be represented by a single number, i.e. that the medium is compressed equally in all directions.

Assumption 1 (Isotropy). The compression tensor $\partial M/\partial x\,(x,t)$ is isotropic (equal in all directions), thus it can be represented as a rotation matrix multiplied by a scalar.

This assumption looks restrictive at first glance, but the infinitely large system with infinitely many particles indeed "looks the same" in all directions at every point, thus we can say it is isotropic.

Assumption 1 has far-reaching implications. Define $l(x,t) := \lambda(\partial x/\partial M\,(x,t))$, which is well-defined since all the eigenvalues are equal in modulus. This variable, called the specific distance, represents the average distance between two neighbouring particles at the point $x$. By the definition of the density, $\rho = l^{-n}$. Further, $\left\| \frac{\partial x}{\partial M}\, q \right\| = l\, \|q\|$.
Breaking the summation in (52) into a sum over all possible lengths $r$ of the multi-index vectors, we can rewrite the summation term as

$$\sum_{r \in \mathbb{N}} \left[ \nabla\cdot\left( \varphi(rl)\, \frac{\partial x}{\partial M} \sum_{\substack{q>0 \\ \|q\|=r}} q q^T\, \frac{\partial x}{\partial M}^T \right) - \varphi(rl)\, \nabla\cdot\left( \frac{\partial x}{\partial M} \right) \sum_{\substack{q>0 \\ \|q\|=r}} q q^T\, \frac{\partial x}{\partial M}^T \right]^T. \quad (54)$$

Proposition 2. Given a length $r \in \mathbb{N}$, the summation over all outer products of positive multi-indices of length $r$ is proportional to the identity matrix, i.e. there exists $\beta(r)$ such that

$$\sum_{\substack{q>0 \\ \|q\|=r}} q q^T = \beta(r)\, I. \quad (55)$$

Proof. First of all, we show that all off-diagonal elements in (55) are zero. Indeed, for any positive $q$ its contribution to the $kj$-th element of the matrix (55) is given by $q_k q_j$. But for any $k \neq j$ we can pick $\bar q$ equal to $q$ except that $\bar q_{\max(k,j)} = -q_{\max(k,j)}$. Then $\bar q$ is also positive and thus included in the summation, while its contribution to the $kj$-th element of (55) has the opposite sign. Therefore all off-diagonal elements of (55) are zero.

Further, all diagonal elements of (55) are equal. This can be proven by an analogous argument. Indeed, take a positive $q$ and look at the elements $q_k$ and $q_j$. Then $\bar q$, which is equal to $q$ except for $\bar q_k = \mathrm{sgn}(q_k)\,|q_j|$ and $\bar q_j = \mathrm{sgn}(q_j)\,|q_k|$, is also positive, but swaps the contributions between the $k$-th and $j$-th diagonal elements. Thus all the contributions to the diagonal elements are equal. Finally,

$$\mathrm{Tr} \sum_{\substack{q>0 \\ \|q\|=r}} q q^T = \sum_{\substack{q>0 \\ \|q\|=r}} q^T q = r^2\, r_q = n\, \beta(r), \quad (56)$$

where $r_q$ denotes the number of positive multi-indices $q$ of length $r$, and thus $\beta(r) = r^2 r_q / n$. It is worth noticing that by [31] the average behaviour of the number of positive multi-indices $q$ of length $r$ is $r_q \propto r^{n-2}$ as $r \to +\infty$, thus $\beta(r) \propto r^n$.

By Assumption 1,

$$\frac{\partial x}{\partial M}\, \frac{\partial x}{\partial M}^T = l^2 I. \quad (57)$$

Using Proposition 2 and (57), (54) becomes

$$\sum_{r \in \mathbb{N}} \beta(r) \left[ \nabla\cdot\big( \varphi(rl)\, l^2 I \big) - \varphi(rl) \left( \nabla\cdot\left( \frac{\partial x}{\partial M} \right) \frac{\partial x}{\partial M}^T \right) \right]^T.$$

The value inside the square brackets can be simplified further. Indeed, by (50) it is possible to inject the density inside, which gives

$$\frac{1}{\rho} \left[ \nabla\cdot\big( \rho\, \varphi(rl)\, l^2 I \big) - \frac{\partial \rho}{\partial x}\, \varphi(rl)\, l^2 - \varphi(rl)\, \rho \left( \nabla\cdot\left( \frac{\partial x}{\partial M} \right) \frac{\partial x}{\partial M}^T \right) \right]^T = \frac{1}{\rho} \left[ \nabla\cdot\big( \rho\, \varphi(rl)\, l^2 I \big) - \varphi(rl) \left( \nabla\cdot\left( \rho\, \frac{\partial x}{\partial M} \right) \frac{\partial x}{\partial M}^T \right) \right]^T.$$
Finally, the second term in the square brackets turns out to be zero, since Lemma 2 in the Appendix proves that $\nabla\cdot\big( \rho\, \frac{\partial x}{\partial M} \big) \frac{\partial x}{\partial M}^T = 0$. Using this Lemma and the fact that $\nabla\cdot(\rho\, \varphi(rl)\, l^2 I) = \nabla(\rho\, \varphi(rl)\, l^2)$, we can define the pressure:

$$P = \sum_{r \in \mathbb{N}} \beta(r)\, \rho\, \varphi(rl)\, l^2 = \sum_{r \in \mathbb{N}} \frac{\beta(r)}{r}\, l^{1-n} f(rl). \quad (58)$$

Note that the pressure is well-defined since the sum is convergent by the property (37). With this definition, the system (52) together with (53) turns into the famous Euler equations:

$$\frac{\partial \rho}{\partial t} = -\nabla\cdot(\rho u), \qquad \frac{\partial u}{\partial t} = -\frac{\partial u}{\partial x}\, u - \frac{\nabla P^T}{\rho}. \quad (59)$$

Therefore the following theorem has been proven:

Theorem 3. There exists a valid continuation process which leads from the Newtonian system (38) to the Euler equations (59) under the assumption that the system is locally isotropic at every point in space.

Remark. In the original ODE system (38) we assumed that an interaction exists between every pair of particles, i.e. that the topology of interactions is all-to-all. In general, in order to obtain (59) it would be sufficient to use any topology for which the isotropy required in Assumption 1 is possible. The difference in topologies would only modify the definition of the pressure $P(x,t)$ in (58). For example, for the grid topology with equations given by

$$\dot x_i = v_i, \qquad \dot v_i = \sum_{k=1}^{n} \big( F(x_i - x_{i-e_k}) - F(x_{i+e_k} - x_i) \big), \quad (60)$$

where $e_k$ denotes the $k$-th basis vector of $\mathbb{R}^n$, the continuation renders the same Euler equations (59) with the pressure given by $P = f(l)/l^{n-1}$.

VI. CONTROL OF ROBOTIC SWARM
In this section we demonstrate how the continuation method described above can help in the analysis and design of control laws for large-scale systems. We do so using the example of a robotic swarm, i.e. a formation of robots whose goal is to follow some desired trajectory while passing through obstacles and preserving the relative agents' positions. Control of robotic formations is an extensively studied topic; see the recent reviews [32], [33]. However, most of the methods rely on graph-theoretic properties of the interaction topology and on simple linear controllers to provide stability. A PDE approach was taken in [34], where the Euler PDE with diffusion terms was used to model flocks of birds. The authors proposed a PDE describing the behaviour of the agents and analyzed it to study a symmetry breaking which leads to a coherent movement of the birds. A similar PDE model was used to control a 3D agent formation with 2D disc communication topology via backstepping in [35]. Another interesting concept was used in [36], [37], namely Partial difference Equations (PdEs), which are an analogue of ordinary Partial Differential Equations defined on graphs. With the help of Lyapunov analysis within the PdE formalism it was proven that a linear diffusion controller stabilizes the formation.

Contrary to previous works, we base our analysis on the continuation procedure, rigorously introducing a PDE describing a formation of drones. We study this PDE and recover a nonlinear local control law which, applied to the agents, forces the whole formation to follow a desired density profile.
A. Continuation and PDE Control
Let us start from a system of drones having double-integrator dynamics:

$$\ddot x_i = \tau_i. \quad (61)$$

Here $x_i \in \mathbb{R}^n$ is the position of the $i$-th drone in $n$-dimensional space and $\tau_i \in \mathbb{R}^n$ is the control we want to design. The drones are enumerated by multi-indices $i \in \mathbb{Z}^n$. Define $v_i = \dot x_i$. Similarly to the previous section, we introduce a multi-index function $M(x,t)$ such that $M(x_i,t) \equiv i$ and then perform a continuation. The resulting system is

$$\frac{\partial \rho}{\partial t} = -\nabla\cdot(\rho u), \qquad \frac{\partial u}{\partial t} = -\frac{\partial u}{\partial x}\, u + \tau(x,t), \quad (62)$$

where $\tau(x,t) = \tau(M(x,t),t)$ is the continuation of the control $\tau_i$.

Now let us formulate the desired system, which will be used as a reference the real formation should converge to. Given a velocity profile $u_d(x)$, we define the desired density $\rho_d(x,t)$ to follow this velocity profile. Essentially this means that the "desired agents" have single-integrator dynamics. Note that in general $u_d$ can depend on time, but we do not consider this case for simplicity of exposition. Thus we assume the desired system is governed by

$$\frac{\partial \rho_d}{\partial t} = -\nabla\cdot(\rho_d u_d). \quad (63)$$

Our goal is to derive $\tau(x,t)$ such that $\rho \to \rho_d$. First, direct calculations from (62) and (63) lead to the following systems in terms of the flows $(\rho u)$ and $(\rho_d u_d)$:

$$\frac{\partial (\rho u)}{\partial t} = -\nabla\cdot(\rho u)\, u - \rho\, \frac{\partial u}{\partial x}\, u + \rho\, \tau(x,t), \qquad \frac{\partial (\rho_d u_d)}{\partial t} = -\nabla\cdot(\rho_d u_d)\, u_d. \quad (64)$$

Define the deviation from the desired density $\tilde\rho = \rho - \rho_d$. Then the second-order equation for the deviation is

$$\frac{\partial^2 \tilde\rho}{\partial t^2} = \nabla\cdot\left[ \nabla\cdot(\rho u)\, u - \nabla\cdot(\rho_d u_d)\, u_d + \rho\, \frac{\partial u}{\partial x}\, u - \rho\, \tau(x,t) \right].$$

In order to cancel the nonlinear terms, define now the control $\tau$ as

$$\tau = \frac{\partial u}{\partial x}\, u + \frac{1}{\rho}\Big[ \nabla\cdot(\rho u)\, u - \nabla\cdot(\rho_d u_d)\, u_d + \alpha\,(\rho_d u_d - \rho u) + \beta\, \nabla(\rho_d - \rho)^T \Big], \quad (65)$$

where $\alpha$ and $\beta$ are some positive gains. Then the equation for the density deviation transforms into

$$\frac{\partial^2 \tilde\rho}{\partial t^2} = -\alpha\, \frac{\partial \tilde\rho}{\partial t} + \beta\, \Delta \tilde\rho. \quad (66)$$

This equation is a damped wave equation, and thus it is asymptotically stable under proper boundary conditions, so that $\tilde\rho \to 0$ and $\rho \to \rho_d$.

B. Discretization of the control
Formula (65) for the PDE (62) is local by nature, but it should be discretized to be implemented on every agent of the original ODE (61). One particular discretization is described next.

First of all, for agent $i$ define a matrix $G_i$ as a discretization of the compression tensor:

$$[G_i]_j = (x_{i+e_j} - x_{i-e_j})/2 \approx \frac{\partial x}{\partial M_j}(x_i, t), \quad (67)$$

where $e_j$ is the $j$-th unit basis vector and $[G_i]_j$ represents the $j$-th column of $G_i$. The matrix $G_i$ depends on the positions of the $2n$ neighbouring agents of the $i$-th agent. In the same way as $G_i$, we define a matrix $W_i$ representing the velocity Jacobian:

$$[W_i]_j = (v_{i+e_j} - v_{i-e_j})/2 \approx \frac{\partial u}{\partial M_j}(x_i, t). \quad (68)$$

Now we can write formulas for all the terms inside (65) in terms of the real system:

$$\frac{\partial u}{\partial x}\, u = \frac{\partial u}{\partial M}\frac{\partial M}{\partial x}\, u \approx W_i G_i^{-1} v_i, \qquad \nabla\cdot u = \sum_{j=1}^{n} \frac{\partial u_j}{\partial M}\frac{\partial M}{\partial x_j} \approx \sum_{j=1}^{n} [W_i^T]_j \cdot [G_i^{-1}]_j,$$
$$\rho \approx 1/\det G_i, \qquad \nabla\rho \approx -\rho^2\, \nabla(\det G_i) \approx -\rho^2\, \frac{\partial(\det G_i)}{\partial M}\, G_i^{-1}, \quad (69)$$

where the gradient of the determinant $\det G_i$ should be computed according to the determinant formula, using second derivatives of the positions discretized similarly to (67):

$$\frac{\partial^2 x}{\partial M_j \partial M_k}(x_i,t) \approx (x_{i+e_j+e_k} + x_{i-e_j-e_k} - x_{i+e_j-e_k} - x_{i-e_j+e_k})/4,$$
$$\frac{\partial^2 x}{\partial M_j^2}(x_i,t) \approx x_{i+e_j} - 2 x_i + x_{i-e_j}.$$

Since the gradient of the determinant depends on the second derivatives, in total each agent requires information about the velocities of its $2n$ neighbouring agents and the positions of its $2n^2$ neighbouring agents, including the diagonal ones.

Finally, substituting (69) into (65), the formula for the control action $\tau_i$ becomes

$$\tau_i = \left[ W_i G_i^{-1} + \Big( \sum_{j=1}^{n} [W_i^T]_j \cdot [G_i^{-1}]_j - \alpha \Big) I \right] v_i + \frac{1}{\det G_i} \big[ \beta I - v_i v_i^T \big]\, G_i^{-T}\, \frac{\partial(\det G_i)}{\partial M}^T + \det G_i \Big[ \alpha\, \rho_d u_d + (\beta I - u_d u_d^T)\, \nabla\rho_d^T - \rho_d\, (\nabla\cdot u_d)\, u_d \Big]. \quad (70)$$

C. Boundary conditions
For the system (66) to converge to zero, proper boundary conditions should be used. Namely, the continuation should be chosen such that $\rho = 0$ on the boundary. This requires defining positions $x_{i \pm e_j}$ and velocities $v_{i \pm e_j}$ for the nonexisting agents.

Assume agent $i - e_j$ is a ghost agent. One natural choice, which we will use for the velocities, is to take $v_{i-e_j} = 2 v_i - v_{i+e_j}$. Substituted into (68), this leads to an approximation of the velocity gradient based solely on the agents $i$ and $i+e_j$.

This idea cannot be used for $x_{i-e_j}$, since in that case the compression tensor (67) would not "feel" that drone $i$ is on the border. Instead we will use an approximation such that the density near the border linearly diminishes to zero, see Fig. 7. Namely, let us look at the 1D case and fix the $i$-th agent to be on the left border. Assume further that the distance between each pair of existing agents is constant and equal to $l$. Then $\rho_{i+e_j} = \rho_{i+2e_j} = l^{-1}$. Also we set $\rho_{i-e_j} = 0$ and define $s := x_i - x_{i-e_j}$. Then, asking for a linear dependency of the density on the position, we necessarily have

$$\rho_i = \frac{l\, \rho_{i-e_j} + s\, \rho_{i+e_j}}{l+s} = \frac{s}{l\,(l+s)}.$$

But by (67) $\rho_i = 2/(l+s)$, which immediately gives the answer $s = 2l$, or

$$x_{i-e_j} = x_i + 2\,(x_i - x_{i+e_j}) = 3 x_i - 2 x_{i+e_j}. \quad (71)$$

This finalizes the formulation of the boundary conditions and thus the correct implementation of (70).

Figure 7. Left boundary of the system (61) with control (70). Agent $i$ is on the boundary; the position of the "ghost agent" $i - e_j$ is chosen such that $\rho$ goes linearly to zero at $x_{i-e_j}$.

D. Numerical Simulation
To demonstrate the control policy (70) we performed a numerical simulation of a cubic formation of 512 drones in 3D space. The goal was to reach a cubic formation, fly through a window and restore the cubic formation after the maneuver. Assume the center of the window is placed at the point $(x_0, 0, 0)$, and the formation should fly through it starting from the origin. The desired velocity field $u_d(x,y,z)$ able to fulfill the task was constructed with a constant $u_{d,x}$ and

$$u_{d,y|z} = 0.05\, \mathrm{atan}(x - x_0)\, e^{-(x-x_0)^2}\, (y|z),$$

where $y|z$ denotes $y$ or $z$; see the left panel of Fig. 8 for the streamlines projected on the $x$-$y$ plane. For simplicity the desired system (63) was simulated by first-order integrators following the desired velocity profile, and the density $\rho_d(x,t)$ was interpolated between the agents.

Both the desired system (63) and the real system (61) were simulated for the cubic formation of $8 \times 8 \times 8$ drones using the Euler method. The initial positions for the real system were multiplied by 2 in comparison to the desired system and a uniform noise was added. The control gains were chosen equal, $\alpha = \beta$.

Figure 8. Left: streamlines of the desired velocity field $u_d(x,y)$. Right: convergence of the $L^2$ norm of the density deviation.

VII. CONCLUSION

We presented a general process of transformation of ODE systems into their PDE counterparts, defining the continuation to be valid if the original ODE system can be obtained from the PDE version by a correct discretization. We further showed that the spectrum of the PDE converges to the spectrum of the ODE. The continuation method was then elaborated for many classes of systems, including nonlinear, multidimensional, and space- and time-varying ones. Based on this method, new continuous models can be derived and further utilized for analysis and control purposes.

As an example, we used the continuation to show how the Euler equations for a compressible fluid can be derived from Newtonian particle interactions, providing more intuition into Hilbert's 6th problem. The same continuation was then used to describe a robot formation flying through a window. We developed a control algorithm to stabilize a desired trajectory based on a continuous representation of the formation. This algorithm is distributed, as every robot requires information only about its neighbouring robots.

It would be desirable to study the continuation method further, namely to provide quantitative measures of how close the PDE solution is to the original ODE one.

APPENDIX
Lemma 1.
Let $J(x, t) \in \mathbb{R}^{n \times n}$ be the Jacobian matrix of the function $M(x, t)$. Let $J(x, t)$ satisfy the dynamic equation
$$\frac{\partial J}{\partial t} = -\frac{\partial (Ju)}{\partial x}, \qquad (72)$$
where $u = u(x, t)$ is some vector field. Then the determinant $\det J$ satisfies the same equation:
$$\frac{\partial \det J}{\partial t} = -\frac{\partial}{\partial x} \cdot \left(\det J \cdot u\right). \qquad (73)$$

Figure 9. Simulation of drones flying through a window. Rows correspond to successive times. Left column, reference: desired system (63), governed by single integrators. Right column, actual: heavily perturbed real system (61) with control (70), which converges to the desired one.

Proof.
First of all, let us rewrite (72) for one element $J_{ik}$ of the matrix $J$:
$$\frac{\partial J_{ik}}{\partial t} = -\frac{\partial (J_i u)}{\partial x_k} = -\sum_{j=1}^n \frac{\partial^2 M_i}{\partial x_k \partial x_j}\, u_j - \sum_{j=1}^n J_{ij}\,\frac{\partial u_j}{\partial x_k} = -\sum_{j=1}^n \frac{\partial J_{ik}}{\partial x_j}\, u_j - \sum_{j=1}^n J_{ij}\,\frac{\partial u_j}{\partial x_k}, \qquad (74)$$
where we used the fact that $J = \partial M / \partial x$.

Now let us recall the definition of the determinant: $\det J = \sum_\sigma \operatorname{sgn}(\sigma) \prod_{i=1}^n J_{\sigma_i, i}$, where $\sigma$ is a permutation of the set $\{1, \dots, n\}$, the sum $\sum_\sigma$ is taken over all possible permutations, and $\operatorname{sgn}(\sigma)$ is the sign of the permutation. Let us take the time derivative and then substitute (74):
$$\frac{\partial \det J}{\partial t} = \sum_\sigma \operatorname{sgn}(\sigma) \sum_{k=1}^n \frac{\partial J_{\sigma_k, k}}{\partial t} \prod_{i=1,\, i \neq k}^n J_{\sigma_i, i} = -\sum_{j=1}^n \sum_\sigma \operatorname{sgn}(\sigma) \sum_{k=1}^n \left[\frac{\partial J_{\sigma_k, k}}{\partial x_j}\, u_j + J_{\sigma_k, j}\,\frac{\partial u_j}{\partial x_k}\right] \prod_{i=1,\, i \neq k}^n J_{\sigma_i, i}. \qquad (75)$$

We will investigate the two parts of (75), corresponding to the first and the second terms inside the square brackets. For the first part we have
$$-\sum_{j=1}^n \sum_\sigma \operatorname{sgn}(\sigma) \sum_{k=1}^n \frac{\partial J_{\sigma_k, k}}{\partial x_j}\, u_j \prod_{i=1,\, i \neq k}^n J_{\sigma_i, i} = -\sum_{j=1}^n \frac{\partial \det J}{\partial x_j}\, u_j = -\frac{\partial \det J}{\partial x}\, u. \qquad (76)$$

The second part is a little more tricky:
$$-\sum_{j=1}^n \sum_\sigma \operatorname{sgn}(\sigma) \sum_{k=1}^n J_{\sigma_k, j}\,\frac{\partial u_j}{\partial x_k} \prod_{i=1,\, i \neq k}^n J_{\sigma_i, i} = -\det J \sum_{j=1}^n \frac{\partial u_j}{\partial x_j} - \sum_{j=1}^n \sum_\sigma \operatorname{sgn}(\sigma) \sum_{k=1,\, k \neq j}^n J_{\sigma_k, j}\,\frac{\partial u_j}{\partial x_k} \prod_{i=1,\, i \neq k}^n J_{\sigma_i, i}.$$

Here we split the summation over $k$ into the term with $k = j$ and all other terms. The first one immediately gives the determinant multiplied by the divergence of the vector field. It turns out that the sum over all other terms is zero. Indeed, consider a permutation $\bar\sigma$ that is equal to $\sigma$ except that $\sigma_j$ and $\sigma_k$ are swapped. Then the sign of $\bar\sigma$ is opposite to the sign of $\sigma$. Further, since the product $J_{\sigma_k, j}\, J_{\sigma_j, j}$ is the only way in which $\sigma_k$ and $\sigma_j$ enter the formula, the absolute value does not change with the change of permutation.
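The cofactor expansion of $\partial \det J / \partial t$ performed in (75) is the permutation form of Jacobi's formula, $\partial \det J / \partial t = \det J \cdot \operatorname{tr}\!\left(J^{-1}\, \partial J / \partial t\right)$. A minimal finite-difference sanity check of that identity (a standalone numerical sketch with an arbitrary smooth matrix path, not the flow Jacobian of the paper):

```python
import numpy as np

def J(t):
    """Arbitrary smooth, well-conditioned matrix path for the check."""
    return np.array([[2 + np.sin(t), 0.3 * t, 0.1],
                     [0.2, 1.5 + t ** 2, 0.4 * np.cos(t)],
                     [0.05 * t, 0.1, 3.0 + np.exp(-t)]])

t0, h = 0.7, 1e-6

# Left-hand side: central difference of det J(t) at t0.
lhs = (np.linalg.det(J(t0 + h)) - np.linalg.det(J(t0 - h))) / (2 * h)

# Right-hand side: det J * tr(J^{-1} dJ/dt), with dJ/dt by central difference.
dJ = (J(t0 + h) - J(t0 - h)) / (2 * h)
rhs = np.linalg.det(J(t0)) * np.trace(np.linalg.solve(J(t0), dJ))

print(abs(lhs - rhs))  # tiny (finite-difference error only)
```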
Therefore, for each $j$, $k$ and for each permutation there exists a permutation which cancels it out. Finally, substitution of the nonzero term of the last equation and (76) into (75) leads to (73). $\blacksquare$

Lemma 2.
Let $\partial x / \partial M$ be isotropic, i.e., representable as a scalar multiplied by a rotation matrix, and let $\rho = \det(\partial M / \partial x)$. Then
$$\nabla \cdot \left(\rho\, \frac{\partial x}{\partial M}\right) \frac{\partial x}{\partial M}^{T} = 0. \qquad (77)$$

Proof.
Define λ = λ ( ∂ M / ∂ x ) , thus ρ = λ n . By isotropy ∂ x ∂ M = λ − ∂ M ∂ x T , (78)therefore the left-hand side of (77) is ∇ · (cid:32) λ n − ∂ M ∂ x T (cid:33) ∂ M ∂ x λ − == λ n − ∇ · (cid:32) ∂ M ∂ x T (cid:33) ∂ M ∂ x + ( n − ) λ n − ∂ λ∂ x , (79)where we used (50) and (78). Now let us investigate the firstterm more closely. Taking the divergence and looking at j -thelement, we see that (cid:34) ∇ · (cid:32) ∂ M ∂ x T (cid:33) ∂ M ∂ x (cid:35) j = n ∑ k = ∂ M ∂ x k T ∂ M ∂ x j . (80) Now, By isotropy ∂ M ∂ x j T ∂ M ∂ x k = ∀ j (cid:54) = k , ∂ M ∂ x k T ∂ M ∂ x k = λ . (81)Taking the derivative of the multiplication of basis vectors: ∂∂ x j (cid:32) ∂ M ∂ x k T ∂ M ∂ x k (cid:33) = ∂ M ∂ x j ∂ x k T ∂ M ∂ x k , (82)but at the same time the value under the derivative is λ by(81), therefore ∂∂ x j (cid:32) ∂ M ∂ x k T ∂ M ∂ x k (cid:33) = ∂ λ ∂ x j = λ ∂ λ∂ x j . (83)Then, taking the derivative of multiplication of different basisvectors with j (cid:54) = k , by (81) we obtain zero: ∂∂ x k (cid:32) ∂ M ∂ x j T ∂ M ∂ x k (cid:33) = ∂ M ∂ x j ∂ x k T ∂ M ∂ x k + ∂ M ∂ x j T ∂ M ∂ x k = , which by equality of (82) and (83) means that for j (cid:54) = k ∂ M ∂ x j T ∂ M ∂ x k = − ∂ M ∂ x j ∂ x k T ∂ M ∂ x k = − λ ∂ λ∂ x j . (84)In the case of j = k by equality of (82) and (83) we have ∂ M ∂ x j T ∂ M ∂ x j = λ ∂ λ∂ x j . (85)Combination of (84) and (85) means that (80) is (cid:34) ∇ · (cid:32) ∂ M ∂ x T (cid:33) ∂ M ∂ x (cid:35) j = ( − n ) λ ∂ λ∂ x j . (86)Finally, substituting (86) in (79) gives zero.A CKNOWLEDGMENT
The Scale-FreeBack project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement N 694209).

REFERENCES

[1] L. Euler. Principia motus fluidorum. Novi commentarii academiae scientiarum Petropolitanae, 271-311. 1761.
[2] J. C. Maxwell. A treatise on electricity and magnetism (Vol. 1). Clarendon Press. 1873.
[3] J. G. Verwer and J. M. Sanz-Serna. Convergence of method of lines approximations to partial differential equations. Computing, 33(3-4), 297-313. 1984.
[4] W. F. Ames. Numerical methods for partial differential equations. Academic Press. 2014.
[5] M. Aoki. Control of large-scale dynamic systems by aggregation. IEEE Transactions on Automatic Control, vol. 13, no. 3, pp. 246-253, 1968.
[6] M. U. B. Niazi, X. Cheng, C. Canudas-de-Wit and J. Scherpen. Structure-based clustering algorithm for model reduction of large-scale network systems. CDC 2019 - 58th IEEE Conference on Decision and Control, Nice, France, Dec 2019.
[7] M. U. B. Niazi, D. Deplano, C. Canudas-de-Wit, and A. Y. Kibangou. Scale-free estimation of the average state in large-scale systems. IEEE Control Systems Letters, vol. 4, no. 1, pp. 211-216, Jan 2020.
[8] D. Nikitin, C. Canudas-de-Wit and P. Frasca. Control of average and deviation in large-scale linear networks. Submitted to IEEE Transactions on Automatic Control.
[9] J. A. Acebron, L. L. Bonilla, C. J. P. Vicente, F. Ritort and R. Spigler. The Kuramoto model: A simple paradigm for synchronization phenomena. Reviews of Modern Physics, 77(1), 137. 2005.
[10] D. Q. Nykamp and D. Tranchina. A population density approach that facilitates large-scale modeling of neural networks: Analysis and an application to orientation tuning. Journal of Computational Neuroscience, 8(1), 19-50. 2000.
[11] H. Grabert. Projection operator techniques in nonequilibrium statistical mechanics. Springer Tracts in Modern Physics, vol. 95, Springer-Verlag, Berlin-New York, 1982.
[12] Y. Yang, D. V. Dimarogonas and X. Hu. Shaping up crowd of agents through controlling their statistical moments. 2015 European Control Conference (ECC), pp. 1017-1022, Linz, 2015.
[13] S. Zhang, A. Ringh, X. Hu and J. Karlsson. Modeling collective behaviors: A moment-based approach. IEEE Transactions on Automatic Control, 2020.
[14] C. Kuehn. Moment closure: a brief review. In Control of Self-Organizing Nonlinear Systems (pp. 253-271). Springer, Cham. 2016.
[15] D. Nikitin, C. Canudas-de-Wit and P. Frasca. Shape-based nonlinear model reduction for 1D conservation laws. 21st IFAC World Congress 2020, Berlin, Germany, July 11-17, 2020.
[16] D. Glasscock. What is a graphon?. Notices of the AMS, 62(1). 2015.
[17] L. Lovász. Large networks and graph limits (Vol. 60). American Mathematical Soc. 2012.
[18] S. Gao and P. E. Caines. Graphon control of large-scale networks of linear systems. IEEE Transactions on Automatic Control, 2019.
[19] R. Vizuete, P. Frasca and F. Garin. Graphon-based sensitivity analysis of SIS epidemics. IEEE Control Systems Letters, vol. 4, no. 3, pp. 542-547, 2020.
[20] A. G. Baydin, B. A. Pearlmutter, A. A. Radul and J. M. Siskind. Automatic differentiation in machine learning: a survey. The Journal of Machine Learning Research, 18(1), 5595-5637. 2017.
[21] B. Bamieh, F. Paganini and M. A. Dahleh. Distributed control of spatially invariant systems. IEEE Transactions on Automatic Control, 47(7), 1091-1107. 2002.
[22] A. E. Frazho and W. Bhosri. Toeplitz and Laurent operators. In: An Operator Perspective on Signals and Systems. Operator Theory: Advances and Applications (Linear Operators and Linear Systems), vol. 204. Birkhäuser Basel. 2010.
[23] A. N. Kolmogorov. On the representation of continuous functions of several variables by superposition of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR 114, 953-956. 1957.
[24] G. F. Newell. A simplified theory of kinematic waves in highway traffic, part I: General theory. Transportation Research Part B: Methodological, 27(4), 281-287. 1993.
[25] I. Gallagher, L. Saint-Raymond and B. Texier. From Newton to Boltzmann: hard spheres and short-range potentials. European Mathematical Society. 2013.
[26] L. Saint-Raymond. Hydrodynamic limits of the Boltzmann equation. Lecture Notes in Mathematics, vol. 1971. Springer-Verlag, 2009.
[27] S. Chapman and T. G. Cowling. The Mathematical Theory of Non-Uniform Gases. Cambridge University Press, Cambridge, 1970.
[28] H. Grad. On the kinetic theory of rarefied gases. Commun. Pure Appl. Math. 2, 325, 1949.
[29] A. N. Gorban and I. Karlin. Hilbert's 6th Problem: exact and approximate hydrodynamic manifolds for kinetic equations. Bull. Amer. Math. Soc. 51(2): 186-246, 2014.
[30] S. Caprino, R. Esposito, R. Marra and M. Pulvirenti. Hydrodynamic limits of the Vlasov equation. Communications in Partial Differential Equations, 18(5-6), 805-820. 1993.
[31] R. Takloo-Bighash. How many lattice points are there on a circle or a sphere?. In: A Pythagorean Introduction to Number Theory. Undergraduate Texts in Mathematics. Springer, Cham. 2018.
[32] K. K. Oh, M. C. Park and H. S. Ahn. A survey of multi-agent formation control. Automatica, 53, 424-440. 2015.
[33] S. Chung, A. A. Paranjape, P. Dames, S. Shen and V. Kumar. A survey on aerial swarm robotics. IEEE Transactions on Robotics, vol. 34, no. 4, pp. 837-855, Aug. 2018.
[34] J. Toner and Y. Tu. Long-range order in a two-dimensional dynamical XY model: How birds fly together. Phys. Rev. Lett., vol. 75, no. 3, pp. 4326-4329, 1995.
[35] J. Qi, R. Vazquez and M. Krstic. Multi-agent deployment in 3-D via PDE control. IEEE Transactions on Automatic Control, 60(4), 891-906. 2014.
[36] G. Ferrari-Trecate, A. Buffa, and M. Gati. Analysis of coordination in multi-agent systems through partial difference equations. IEEE Trans. Autom. Control, vol. 51, no. 6, pp. 1058-1063, Jun. 2006.
[37] M. Ji, G. Ferrari-Trecate, M. Egerstedt, and A. Buffa. Containment control in mobile networks. IEEE Trans. Autom. Control, vol. 53, no. 8, pp. 1972-1975, Sep. 2008.
[38] G. B. Folland. Introduction to partial differential equations (Vol. 102). Princeton University Press. 1995.

Denis Nikitin received both the B.Sc. and the M.Sc.
degrees from the Mathematics and Mechanics Faculty of Saint Petersburg State University, Saint Petersburg, Russia, specializing in control theory and cybernetics. He won several international robotics competitions as a student and taught robotics at the Math&Phys Lyceum 239 in Saint Petersburg. He is currently a doctoral researcher at CNRS, GIPSA-Lab, Grenoble, France. His current research mainly focuses on the control of large-scale systems.
Carlos Canudas-de-Wit (F'16) was born in Villahermosa, Mexico, in 1958. He received the B.S. degree in electronics and communications from the Monterrey Institute of Technology and Higher Education, Monterrey, Mexico, in 1980, and the M.S. and Ph.D. degrees in automatic control from the Department of Automatic Control, Grenoble Institute of Technology, Grenoble, France, in 1984 and 1987, respectively. He is currently a Directeur de Recherche (Senior Researcher) with CNRS, Grenoble, where he is the leader of the NeCS team, a joint team of GIPSA-Lab (CNRS) and INRIA on networked controlled systems. Dr. Canudas-de-Wit is an IFAC Fellow. He was an Associate Editor of the IEEE Transactions on Automatic Control, Automatica, and the IEEE Transactions on Control Systems Technology. He is an Associate Editor of the Asian Journal of Control and the IEEE Transactions on Control of Network Systems. He served as the President of the European Control Association from 2013 to 2015 and as a member of the Board of Governors of the IEEE Control Systems Society from 2011 to 2014. He holds the ERC Advanced Grant Scale-FreeBack from 2016 to 2021.