Space mapping-based optimization with the macroscopic limit of interacting particle systems
Jennifer Weißen∗, Simone Göttlich∗ and Claudia Totzeck∗

February 23, 2021
Abstract
We propose a space mapping-based optimization algorithm for microscopic interacting particle dynamics which are inappropriate for direct optimization. This is of relevance for example in applications with bounded domains such that the microscopic optimization is difficult. The space mapping algorithm exploits the relationship of the microscopic description of the interacting particle system and the corresponding macroscopic description as partial differential equation in the "many particle limit". We validate the approach with the help of a toy problem that allows for direct optimization. Then we study the performance of the algorithm in two applications. An evacuation dynamic is considered and the transportation of goods on a conveyor belt is optimized. The numerical results underline the feasibility of the proposed algorithm.
Key words: model hierarchy; optimization; space mapping; interacting particle systems
Subject Classification:
1 Introduction

In recent decades, interacting particle systems attracted a lot of attention from researchers of various fields such as swarming, pedestrian dynamics and opinion formation (cf. [1, 25, 31, 32] and the references therein). In particular, a model hierarchy was established [12, 20]. The main idea of the hierarchy is to model the same dynamics with different accuracies, each having its own advantages and disadvantages. The model with the highest accuracy is the microscopic one. It describes the positions and velocities of each particle explicitly. For applications with many particles involved, this microscopic modelling leads to a huge amount of computational effort and storage needed, especially when it comes to the optimization of problems with many particles [10, 11].

There is also an intermediate level of accuracy given by the mesoscopic description, see [1, 12, 32]. We do not want to give its details here; instead, we directly pass to the macroscopic level, where the velocities are averaged and a position-dependent density describes the probability of finding a particle of the dynamics at a given position. Of course, we lose the explicit information of each particle, but have the advantage of saving a lot of storage in the simulation of the dynamics. Despite the lower accuracy, many studies [1, 11, 29] indicate that the evolution of the density yields a good approximation of the original particle system, see also [35], which proposed a limiting procedure that is considered in more detail below.

∗ University of Mannheim, Department of Mathematics, 68131 Mannheim, Germany ({jennifer.weissen,goettlich,totzeck}@uni-mannheim.de)

We begin with the general framework and propose the space mapping technique to approximate an optimal solution of the interacting particle system.
In general, the interacting particle dynamics for $N \in \mathbb{N}$ particles in the microscopic setting is given by the ODE system
$$\frac{dx_i}{dt} = v_i, \qquad m\,\frac{dv_i}{dt} = G(x_i, v_i) + A \sum_{j \neq i} F(x_i - x_j), \qquad x_i(0) = x_i^0, \quad v_i(0) = v_i^0, \qquad i = 1, \dots, N, \tag{1}$$
where $x_i \in \mathbb{R}^2$, $v_i \in \mathbb{R}^2$ are the position and the velocity of particle $i$, supplemented with the initial conditions $x_i(0) = x_i^0$, $v_i(0) = v_i^0$ for $i = 1, \dots, N$. Here, $F$ denotes an interaction kernel which is often given as a gradient of a potential [15]. For notational convenience, we define the state vector $y = (x_i, v_i)_{i=1,\dots,N}$ which contains the position and velocity information of all particles.

Remark 1.
Note that there are models that include boundary dynamics with the help of soft-core interactions, see for example [25]. In general, these models allow for direct optimization. Nevertheless, for $N \gg 1$ the curse of dimensions applies and the approach discussed here may still be useful.

Sending $N \to \infty$ and averaging the velocity, we formally obtain a macroscopic approximation of the ODE dynamics given by the PDE
$$\partial_t \rho + \nabla \cdot \big(\rho\, \bar v(x) - k(\rho) \nabla \rho\big) = 0, \quad (x,t) \in \Omega \times [0,T], \qquad \rho(x, 0) = \rho_0(x), \quad x \in \Omega, \tag{2}$$
where $\rho = \rho(x,t)$ denotes the particle density in the domain $\Omega \subseteq \mathbb{R}^2$. The velocity $\bar v$ is the averaged velocity depending on the position and $k(\rho)$ describes the diffusion.

We consider constrained optimization problems of the form
$$\min_{u \in U_{ad}} J(u, y) \quad \text{subject to} \quad E(u, y) = 0,$$
where $J$ is the cost functional, $U_{ad}$ is the set of admissible controls and $y$ are the state variables satisfying $E(u, y) = 0$. In the following, for a given control $u \in U_{ad}$, the constraint $E(u, y)$ contains the modeling equations for systems of ODEs or PDEs. With the additional assumption that for a given control $u$ the model equations have a unique solution, we can express $y = y(u)$ and consider the reduced problem
$$\min_{u \in U_{ad}} J(u, y(u)). \tag{3}$$
This is a nonlinear optimization problem, which we intend to solve for an ODE constraint $E(u, y(u))$. To do this, one might follow a standard approach [26] and apply a gradient descent method based on adjoints [34] to solve the microscopic reduced problem iteratively. In contrast, the space mapping technique employs a cheaper, substitute model (coarse model) for the optimization of the fine model optimization problem. Under the assumption that the optimization of the microscopic system is difficult and the optimization of the macroscopic system can be computed efficiently, we propose space mapping-based optimization. The main objective is to iteratively approximate an optimal control for the microscopic dynamics. To get there, we solve a related optimal control problem on the macroscopic level in each iteration.

Space mapping was originally introduced in the context of electromagnetic optimization [6]. The original formulation has been subject to improvements and changes [8] and enhanced by classical methods for nonlinear optimization. The use of Broyden's method to construct a linear approximation of the space mapping function, so-called aggressive space mapping (ASM), was introduced by Bandler et al. [7].
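Returning to the microscopic system (1), its dynamics can be sketched in code. The following is a minimal Python sketch with explicit Euler time stepping; the velocity selection term `G`, the interaction kernel `F`, and all parameter values are placeholders to be supplied by the user, not quantities fixed by the paper:

```python
import numpy as np

def simulate(x0, v0, G, F, A, m=1.0, dt=0.01, steps=500):
    """Explicit Euler integration of the particle system (1).

    x0, v0 : (N, 2) arrays of initial positions and velocities
    G      : velocity selection term, G(x, v) -> (N, 2)
    F      : interaction kernel acting on pairwise differences,
             F(d) -> array of the same shape as d
    A      : interaction strength
    """
    x, v = x0.astype(float).copy(), v0.astype(float).copy()
    N = x.shape[0]
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]      # x_i - x_j, shape (N, N, 2)
        forces = F(diff)
        forces[np.arange(N), np.arange(N)] = 0.0  # exclude j = i from the sum
        inter = A * forces.sum(axis=1)
        # position update with old velocity, then velocity update
        x, v = x + dt * v, v + dt / m * (G(x, v) + inter)
    return x, v
```

With a relaxation-type choice of `G` (as used later in the paper) and interactions switched off, particles drift towards the equilibrium velocity field.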
We refer to [4, 8] for an overview of space mapping methods.

More recently, space mapping has been successfully used in PDE-based optimization problems. Banda and Herty [5] presented an approach for dynamic compressor optimization in gas networks. Göttlich and Teuber [24] use space mapping-based optimization to control the inflow in transmission lines. In both cases, the fine model is given by hyperbolic PDEs on networks and the main difficulty arises from the nonlinear dynamics induced by the PDE. These dynamics limit the possibility to efficiently solve the optimization problems. In their model hierarchy, a simpler PDE serves as the coarse model, and computational results demonstrate that such a space mapping approach enables to efficiently compute accurate results. Pinnau and Totzeck [33] used space mapping for the optimization of a stochastic interacting particle system. In their approach the deterministic state model was used as coarse model and led to satisfying results. Here, we employ a mixed hyperbolic-parabolic PDE as the coarse model in the space mapping technique to solve a control problem on the ODE level. Our optimization approach therefore combines different hierarchy levels. As discussed, the difficulty on the ODE level can arise due to boundaries in the underlying spatial domain or due to a large number of interacting particles. In contrast, the macroscopic equation naturally involves boundary conditions and its computational effort is independent of the particle number.

The outline of the paper is as follows: We introduce the space mapping technique in section 2 together with the fine and coarse model description in the subsections 2.1 and 2.2. Particular attention is paid to the solution approach for the discretized coarse model in section 2.2.2, which is an essential step in the space mapping algorithm.
The discretized fine model optimal control problem is presented in section 3, and the space mapping approach is validated by comparisons to a standard optimization technique for the fine model. We provide numerical examples in bounded domains in section 4. Various controls, such as the source of an eikonal field in evacuation dynamics, cf. section 4.1, and the conveyor belt velocity in a material flow setting, cf. section 4.2, demonstrate the diversity of the proposed space mapping approach. Our insights are summarized in the conclusion in section 5.
Space mapping considers a model hierarchy consisting of a coarse and a fine model. Let $G_c: U^c_{ad} \to \mathbb{R}^{n_c}$, $G_f: U^f_{ad} \to \mathbb{R}^{n_f}$ denote the operators mapping a given control $u$ to a specified observable $G_c(u)$ in the coarse and $G_f(u)$ in the fine model, respectively. The idea of space mapping is to find the optimal control $u_f^* \in U^f_{ad}$ of the complicated (fine) model control problem with the help of a coarse model that is simple to optimize.

We assume that the optimal control of the fine model,
$$u_f^* = \operatorname*{argmin}_{u \in U^f_{ad}} \|G_f(u) - \omega^*\|,$$
where $\omega^* \in \mathbb{R}^{n_f}$ is a given target state, is inappropriate for direct computation. In contrast, we assume the optimal control $u_c^* \in U^c_{ad}$ of the coarse model control problem,
$$u_c^* = \operatorname*{argmin}_{u \in U^c_{ad}} \|G_c(u) - \omega^*\|,$$
can be obtained with standard optimization techniques. While it is computationally cheaper to solve the coarse model, it helps to acquire information about the optimal control variables of the fine model. By exploiting the relationship of the models, space mapping combines the simplicity of the coarse model and the accuracy of the more detailed, fine model very efficiently [3, 17].

Definition 2.1. The space mapping function $T: U^f_{ad} \to U^c_{ad}$ is defined by
$$T(u_f) = \operatorname*{argmin}_{u \in U^c_{ad}} \|G_c(u) - G_f(u_f)\|.$$

The process of determining $T(u_f)$, the solution to the minimization problem in Definition 2.1, is called parameter extraction. It requires a single evaluation of the fine model $G_f(u_f)$ and a minimization in the coarse model to obtain $T(u_f) \in U^c_{ad}$. Uniqueness of the solution to the optimization problem is desirable but in general not ensured, since it strongly depends on the two models and the admissible sets of controls $U^f_{ad}$, $U^c_{ad}$; see [17] for more details.

The basic idea of space mapping is that either the target state is reachable, i.e., $G_f(u_f^*) \approx \omega^*$, or both models are relatively similar in the neighborhood of their optima, i.e., $G_f(u_f^*) \approx G_c(u_c^*)$. Then we have
$$T(u_f^*) = \operatorname*{argmin}_{u \in U^c_{ad}} \|G_c(u) - G_f(u_f^*)\| \approx \operatorname*{argmin}_{u \in U^c_{ad}} \|G_c(u) - \omega^*\| = u_c^*,$$
i.e., the fine model optimum is (approximately) mapped to the coarse model optimum by $T$. For the evaluation of the space mapping function $T$, we therefore only use model evaluations. In fact, the space mapping algorithm allows us to shift most of the model evaluations in an optimization process to the faster, coarse model. In particular, no gradient information of the fine model is needed to approximate the optimal fine model control [3]. Figure 1 illustrates the main steps of the space mapping algorithm.

Figure 1: Schematic representation of a space mapping algorithm.

In the literature, many variants of the space mapping idea can be found [8]. We will use the ASM algorithm, see algorithm 1 in Appendix A or the references [7, 24] for algorithmic details. Starting from the iterate $u_0 = u_c^*$, the descent direction $d_k$ is updated in each iteration $k$ using the space mapping evaluation $T(u_k)$. The algorithm terminates when the parameter extraction maps the current iterate $u_k$ (approximately) to the coarse model optimum $u_c^*$, such that $\|T(u_k) - u_c^*\|$ is smaller than a given tolerance in an appropriate norm $\|\cdot\|$.
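Since Algorithm 1 (Appendix A) is not reproduced in this excerpt, the following is only a hedged sketch of an ASM iteration with a Broyden approximation of the space mapping function, as described above. The parameter extraction `T` is treated as a given black box, and all names are ours:

```python
import numpy as np

def asm(u_c_star, T, tol=1e-10, max_iter=50):
    """Aggressive space mapping sketch: quasi-Newton iteration on the
    residual T(u_k) - u_c_star with a Broyden rank-1 update of a linear
    model of the space mapping function."""
    u = np.asarray(u_c_star, dtype=float).copy()   # start from u_0 = u_c*
    B = np.eye(u.size)                             # linear model of T
    Tu = T(u)
    for _ in range(max_iter):
        r = Tu - u_c_star            # stop once T(u_k) is mapped to u_c*
        if np.linalg.norm(r) < tol:
            break
        d = np.linalg.solve(B, -r)   # descent direction d_k
        u = u + d
        Tu_new = T(u)
        y = Tu_new - Tu
        B += np.outer(y - B @ d, d) / (d @ d)   # Broyden update
        Tu = Tu_new
    return u
```

For an affine model mismatch, the iteration recovers the control whose parameter extraction hits the coarse optimum exactly.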
The solutions $u_c^*$ and $T(u_k)$ are computed using adjoints here; this will be explained in section 2.2.2.

We seek to control a general microscopic model for the movement of $N$ particles with dynamics given by (1). We choose the velocity selection mechanism
$$G(x, v) = -\frac{v - \bar v(x)}{\tau},$$
which describes the correction of the particle velocities towards an equilibrium velocity $\bar v(x)$ with relaxation time $\tau$. Such systems describe the movements of biological ensembles such as schools of fish, flocks of birds [2, 13, 16], ant [9] or bacterial colonies [28], as well as pedestrian crowds [23, 25] and transport of material [21, 22]. In general, the force $F$ occurring in (1) is a pairwise interaction force between particle $i$ and particle $j$. We choose to activate it whenever two particles overlap, i.e., $\|x_i - x_j\| < R$. For $\|x_i - x_j\| \geq R$, the interaction force is assumed to be zero. In the following we restrict ourselves to forces described by
$$F(x_i - x_j) = \begin{cases} b_F\,\big(\|x_i - x_j\| - R\big)\,\dfrac{x_i - x_j}{\|x_i - x_j\|} & \text{if } \|x_i - x_j\| \leq R, \\ 0 & \text{otherwise,} \end{cases} \qquad b_F > 0. \tag{4}$$
We set $E(u, y) = 0$ if and only if the microscopic model equations (1) are satisfied, and investigate various controls $u$, for example $u$ being the local equilibrium velocity $\bar v(x)$ of the velocity selection mechanism, or $u$ being the factor $A$ scaling the interaction force between the particles. The objective function under consideration in each of the scenarios is the squared deviation of the performance evaluation $j(u, y(u))$ from the target value $\omega^* \in \mathbb{R}$, that is
$$J(u, y(u)) = \tfrac{1}{2}\,\big(j(u, y(u)) - \omega^*\big)^2. \tag{5}$$
In the following we discuss the macroscopic approximation which is used as coarse model for the space mapping. Reference [35] shows that in the many particle limit, $N \to \infty$, the microscopic system (1) can be approximated by the advection-diffusion equation (2) with $k(\rho) = C\rho H(\rho - \rho_{crit})$.
The constant $C = A\tilde C\tau$, derived from the microscopic interaction force, is defined through the relation
$$\lim_{R \to 0} \int_{B_R(0)} F(z)\,\langle \nabla\rho(x), z\rangle\, dz = \tilde C\,\nabla\rho(x),$$
where $\tilde C < \infty$. The density $\rho_{crit} = 1$ is a density threshold above which diffusion in the macroscopic model is activated. $H$ denotes the Heaviside function
$$H(x) = \begin{cases} 0 & x < 0, \\ 1 & x \geq 0. \end{cases}$$
At the boundary, we apply zero-flux boundary conditions for the advective and the diffusive flux,
$$(\bar v \rho) \cdot \vec n = 0, \quad x \in \partial\Omega, \qquad (k(\rho)\nabla\rho) \cdot \vec n = 0, \quad x \in \partial\Omega, \tag{6}$$
where $\vec n = (n^{(1)}, n^{(2)})^T$ is the outer normal vector at the boundary $\partial\Omega$.

The advection-diffusion equation (2) serves as the coarse model in the space mapping technique. To solve optimization problems in the coarse model, we pursue a first-discretize-then-optimize approach. In the following, we discretize the macroscopic model and derive the first order optimality system for the discretized macroscopic system.
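The nonlinear diffusion coefficient $k(\rho) = C\rho H(\rho - \rho_{crit})$ can be written down directly. In the sketch below, the logistic regularization of $H$ and the width `eps` are our assumptions (the paper only states later that $H$ is smoothly approximated), as are the default parameter values:

```python
import numpy as np

def smoothed_heaviside(x, eps=0.1):
    """Logistic approximation of the Heaviside function H; the sigmoid
    and the width eps are assumptions, not the paper's choice."""
    return 1.0 / (1.0 + np.exp(-x / eps))

def k(rho, C=1.0, rho_crit=1.0, eps=0.1):
    """Diffusion coefficient k(rho) = C * rho * H(rho - rho_crit):
    diffusion is switched on only where the density exceeds rho_crit,
    i.e. where microscopic particles would overlap."""
    return C * rho * smoothed_heaviside(rho - rho_crit, eps)
```

Below the threshold the coefficient is essentially zero, so the coarse model stays purely hyperbolic there.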
Remark 2.
We recommend choosing the optimization approach depending on the structure of the macroscopic equation. Here, the PDE is hyperbolic whenever no particles overlap; we therefore choose first-discretize-then-optimize. If the macroscopic equation were purely diffusive, one might employ a first-optimize-then-discretize approach instead.

2.2.1 Discretization of the macroscopic model

We discretize a rectangular spatial domain $(\Omega \cup \partial\Omega) \subset \mathbb{R}^2$ with grid points $x_{ij} = (i\Delta x^{(1)}, j\Delta x^{(2)})$, $(i,j) \in I_\Omega = \{1, \dots, N_{x^{(1)}}\} \times \{1, \dots, N_{x^{(2)}}\}$. The boundary $\partial\Omega$ is described with the set of indices $I_{\partial\Omega} \subset I_\Omega$. The time discretization of the coarse model is $\Delta t_c$ and the grid constants are $\lambda^{(1)} = \Delta t_c/\Delta x^{(1)}$ and $\lambda^{(2)} = \Delta t_c/\Delta x^{(2)}$. We compute the approximate solution to the advection-diffusion equation (2) as
$$\rho(x, t) = \rho^s_{ij} \quad \text{for } x \in C_{ij},\; t \in [t_s, t_{s+1}),$$
where
$$C_{ij} = \Big[(i - \tfrac12)\Delta x^{(1)}, (i + \tfrac12)\Delta x^{(1)}\Big) \times \Big[(j - \tfrac12)\Delta x^{(2)}, (j + \tfrac12)\Delta x^{(2)}\Big), \qquad t_s = s\Delta t_c \ \text{ for } s = 1, \dots, N^c_t.$$
The discretization of the initial density in (2) is obtained from the microscopic initial positions smoothed with a Gaussian filter $\eta$,
$$\eta(x) = \frac{1}{2\pi} e^{-\|x\|^2/2},$$
such that the initial density reads
$$\rho^0 = \eta * \Bigg( \sum_i \frac{\pi R^2}{4\,\Delta x^{(1)}\Delta x^{(2)}}\, \mathbb{1}\big(x^0_i \in C_{ij}\big) \Bigg)_{(i,j) \in I_\Omega}. \tag{7}$$
To compute $\rho^s_{ij}$, $s > 0$, we solve the advection part with the upwind scheme and apply dimensional splitting. The diffusion part is solved implicitly:
$$\tilde\rho^s_{ij} = \rho^s_{ij} - \frac{\Delta t_c}{\Delta x^{(1)}}\Big(F^{(1),s,+}_{ij} - F^{(1),s,-}_{ij}\Big), \qquad \bar\rho^s_{ij} = \tilde\rho^s_{ij} - \frac{\Delta t_c}{\Delta x^{(2)}}\Big(F^{(2),s,+}_{ij} - F^{(2),s,-}_{ij}\Big), \qquad \rho^{s+1}_{ij} = \bar\rho^s_{ij} + \frac{\Delta t_c}{\Delta x^{(1)}\Delta x^{(2)}}\, B^{s+1}_{ij}, \tag{8}$$
where the following short notation is used:
$$F^{(1),s,+}_{ij} = F^{(1)}(\rho^s_{ij}, \rho^s_{i+1j}), \quad F^{(1),s,-}_{ij} = F^{(1)}(\rho^s_{i-1j}, \rho^s_{ij}), \quad F^{(2),s,+}_{ij} = F^{(2)}(\tilde\rho^s_{ij}, \tilde\rho^s_{ij+1}), \quad F^{(2),s,-}_{ij} = F^{(2)}(\tilde\rho^s_{ij-1}, \tilde\rho^s_{ij}),$$
$$B^{s+1}_{ij} = B\big(\rho^{s+1}_{i-1j}, \rho^{s+1}_{i+1j}, \rho^{s+1}_{ij}, \rho^{s+1}_{ij-1}, \rho^{s+1}_{ij+1}\big).$$
$F^{(1)}$, $F^{(2)}$ and $B$ are given by
$$F^{(1)}(\rho^s_{ij}, \rho^s_{i+1j}) = \begin{cases} \rho^s_{ij}\, v^{(1)}_{ij} & \text{if } v^{(1)}_{ij} \geq 0,\ (i+1, j) \in I_\Omega \setminus I_{\partial\Omega}, \\ \rho^s_{i+1j}\, v^{(1)}_{ij} & \text{if } v^{(1)}_{ij} < 0,\ (i, j) \in I_\Omega \setminus I_{\partial\Omega}, \end{cases}$$
$$F^{(2)}(\tilde\rho^s_{ij}, \tilde\rho^s_{ij+1}) = \begin{cases} \tilde\rho^s_{ij}\, v^{(2)}_{ij} & \text{if } v^{(2)}_{ij} \geq 0,\ (i, j+1) \in I_\Omega \setminus I_{\partial\Omega}, \\ \tilde\rho^s_{ij+1}\, v^{(2)}_{ij} & \text{if } v^{(2)}_{ij} < 0,\ (i, j) \in I_\Omega \setminus I_{\partial\Omega}, \end{cases}$$
$$B\big(\rho^{s+1}_{i-1j}, \rho^{s+1}_{i+1j}, \rho^{s+1}_{ij}, \rho^{s+1}_{ij-1}, \rho^{s+1}_{ij+1}\big) = b^{s+1}_{i-1j} + b^{s+1}_{i+1j} - 4\, b^{s+1}_{ij} + b^{s+1}_{ij-1} + b^{s+1}_{ij+1},$$
where $\bar v(x_{ij}) = v_{ij}$, $v_{ij} = 0$ for all $(i,j) \in I_{\partial\Omega}$, and $b^{s+1}_{ij} = b(\rho^{s+1}_{ij})$ with $b(\rho) = \int_0^\rho C z H(z - \rho_{crit})\, dz$. The Heaviside function $H$ is smoothly approximated, and the time step restriction for the numerical simulations is given by the CFL condition of the hyperbolic part,
$$\Delta t_c \leq \min_{(i,j)} \left( \frac{|v^{(1)}_{ij}|}{\Delta x^{(1)}} + \frac{|v^{(2)}_{ij}|}{\Delta x^{(2)}} \right)^{-1},$$
compare [27, 35]. We denote the vector of density values $\rho = (\rho^s_{ij})_{(i,j,s) \in I_\Omega \times \{0,\dots,N^c_t\}}$. It is the discretized solution (8) of the macroscopic equation (2), which depends on a given control $u$. The vectors containing intermediate density values $\tilde\rho$, $\bar\rho$ and Lagrange parameters $\mu$, $\tilde\mu$, $\bar\mu$ used below are defined analogously.

Next, we turn to the solution of the coarse-scale optimization problem.
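A minimal sketch of the advective part of the splitting scheme (8): upwind interface fluxes, dimensional splitting, and zero flux through the outer walls. For brevity, the implicit diffusion step and interior obstacle cells are omitted, the velocities are taken cell-centered, and all names are ours:

```python
import numpy as np

def _first(axis, ndim):
    idx = [slice(None)] * ndim
    idx[axis] = 0
    return tuple(idx)

def _last(axis, ndim):
    idx = [slice(None)] * ndim
    idx[axis] = -1
    return tuple(idx)

def upwind_split_step(rho, v1, v2, dt, dx1, dx2):
    """One advection step: upwind sweep in x^(1), then in x^(2)."""
    def sweep(r, v, axis, dx):
        rp = np.roll(r, -1, axis=axis)            # downstream neighbor rho_{i+1}
        flux = np.where(v >= 0, r * v, rp * v)    # F^{s,+}: upwind interface flux
        flux[_last(axis, r.ndim)] = 0.0           # no flux through the right wall
        fm = np.roll(flux, 1, axis=axis)          # F^{s,-}: left interface flux
        fm[_first(axis, r.ndim)] = 0.0            # no flux through the left wall
        return r - dt / dx * (flux - fm)
    rho = sweep(rho, v1, axis=0, dx=dx1)          # dimensional splitting
    return sweep(rho, v2, axis=1, dx=dx2)
```

Under the CFL condition above, each sweep is conservative and positivity-preserving, which the zero-flux walls do not destroy.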
The construction of a solution to this problem is paramount to the space mapping algorithm. We provide a short discussion of the adjoint method for the optimization problem (3) before we specify the macroscopic adjoints.
First Order Optimality System
Let $J(u, y(u))$ be an objective function which depends on the given control $u$. We wish to solve the optimization problem (3) and apply a descent algorithm. In a descent algorithm, a current iterate $u_k$ is updated in the direction of descent of the objective function $J$ until the first order optimality condition is satisfied. An efficient way to compute the first order optimality conditions is based on the adjoint, which we recall in the following. Let the Lagrangian function be defined as
$$L(u, y(u)) = J(u, y(u)) + \mu^T E(u, y(u)),$$
where $\mu$ is called the Lagrange multiplier. Solving $dL = 0$ yields the first order optimality system

(i) $E(u, y(u)) = 0$,
(ii) $\big(\partial_y E(u, y(u))\big)^T \mu = -\big(\partial_y J(u, y(u))\big)^T$,
(iii) $\frac{d}{du} J(u, y(u)) = \partial_u J(u, y(u)) + \mu^T \partial_u E(u, y(u)) = 0$.

To compute the total derivative $\frac{d}{du} J(u, y(u))$, the system $E(u, y(u)) = 0$ is solved forward in time. Then, the information of the forward solve is used to solve the adjoint system (ii) backwards in time. Lastly, the gradient is obtained from the adjoint state and the objective function derivative.
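The forward-backward structure of steps (i)-(iii) can be illustrated on a deliberately simple stand-in constraint $y_{s+1} = u\,y_s$ (a scalar toy model of ours, not the paper's particle or density dynamics):

```python
def adjoint_gradient(u, y0, S, omega):
    """Gradient of J(u) = 0.5 * (y_S - omega)**2 subject to the toy
    forward model y_{s+1} = u * y_s, computed with the adjoint method."""
    # (i) forward solve of the constraint E(u, y) = 0
    y = [y0]
    for _ in range(S):
        y.append(u * y[-1])
    # terminal adjoint mu_S = dJ/dy_S
    mu = y[-1] - omega
    grad = 0.0
    # (ii) adjoint solve backwards in time and
    # (iii) accumulation of dJ/du = sum_s mu_{s+1} * y_s
    for s in reversed(range(S)):
        grad += mu * y[s]
        mu = u * mu            # mu_s = (dPhi/dy)^T mu_{s+1} = u * mu_{s+1}
    return grad
```

One forward and one backward sweep yield the exact discrete gradient, matching the analytic derivative of $y_S = u^S y_0$, independently of the number of time steps.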
We use a nonlinear conjugate gradient method [14, 19] within our descent algorithm to update the iterate as follows:
$$d_k = -\nabla J(u_k, y(u_k)) + \hat\beta_k d_{k-1}, \qquad u_{k+1} = u_k + \sigma_k d_k. \tag{9}$$
The step size $\sigma_k$ is chosen such that it satisfies the Armijo rule [26, 30],
$$J\big(u_k + \sigma_k d_k, y(u_k + \sigma_k d_k)\big) - J\big(u_k, y(u_k)\big) \leq \sigma_k c_1 \nabla J\big(u_k, y(u_k)\big)^T d_k, \tag{10}$$
and the standard Wolfe condition [30],
$$\nabla J\big(u_k + \sigma_k d_k, y(u_k + \sigma_k d_k)\big)^T d_k \geq c_2 \nabla J\big(u_k, y(u_k)\big)^T d_k, \tag{11}$$
with $0 < c_1 < c_2 < 1$. We start from $\sigma_k = 1$ and cut the step size in half until (10)-(11) are satisfied. The parameter $\hat\beta_k$ is given by
$$\hat\beta_k = \frac{\|\nabla J(u_{k+1}, y(u_{k+1}))\|^2}{d_k^T \hat d_k} \quad \text{with} \quad \hat d_k = \nabla J\big(u_{k+1}, y(u_{k+1})\big) - \nabla J\big(u_k, y(u_k)\big),$$
which together with conditions (10)-(11) ensures convergence to a minimizer [14]. We refer to this method as the adjoint method (AC). In the following we apply this general strategy to our macroscopic equation.

Macroscopic Lagrangian
We consider objective functions depending on the density, i.e., $J_c(u, \rho)$. The discrete Lagrangian $L = L(u, \rho, \tilde\rho, \bar\rho, \mu, \tilde\mu, \bar\mu)$ is given by
$$L = J_c(u, \rho) + \sum_{s=0}^{N^c_t} \sum_{i=1}^{N_{x^{(1)}}} \sum_{j=1}^{N_{x^{(2)}}} \mu^s_{ij} \left( \frac{\tilde\rho^s_{ij} - \rho^s_{ij}}{\Delta t_c} + \frac{F^{(1),s,+}_{ij} - F^{(1),s,-}_{ij}}{\Delta x^{(1)}} \right) + \sum_{s=0}^{N^c_t} \sum_{i=1}^{N_{x^{(1)}}} \sum_{j=1}^{N_{x^{(2)}}} \tilde\mu^s_{ij} \left( \frac{\bar\rho^s_{ij} - \tilde\rho^s_{ij}}{\Delta t_c} + \frac{F^{(2),s,+}_{ij} - F^{(2),s,-}_{ij}}{\Delta x^{(2)}} \right) + \sum_{s=0}^{N^c_t} \sum_{i=1}^{N_{x^{(1)}}} \sum_{j=1}^{N_{x^{(2)}}} \bar\mu^s_{ij} \left( \frac{\rho^{s+1}_{ij} - \bar\rho^s_{ij}}{\Delta t_c} - \frac{B^{s+1}_{ij}}{\Delta x^{(1)}\Delta x^{(2)}} \right). \tag{12}$$
We differentiate the Lagrangian with respect to $\rho^s_{ij}$:
$$\partial_{\rho^s_{ij}} L = \partial_{\rho^s_{ij}} J_c(u, \rho) - \mu^s_{ij} \left( \frac{1}{\Delta t_c} - \frac{\partial_{\rho^s_{ij}} F^{(1),s,+}_{ij}}{\Delta x^{(1)}} + \frac{\partial_{\rho^s_{ij}} F^{(1),s,-}_{ij}}{\Delta x^{(1)}} \right) + \mu^s_{i-1j}\, \frac{\partial_{\rho^s_{ij}} F^{(1),s,+}_{i-1j}}{\Delta x^{(1)}} - \mu^s_{i+1j}\, \frac{\partial_{\rho^s_{ij}} F^{(1),s,-}_{i+1j}}{\Delta x^{(1)}} + \bar\mu^{s-1}_{ij} \left( \frac{1}{\Delta t_c} - \frac{\partial_{\rho^s_{ij}} B^s_{ij}}{\Delta x^{(1)}\Delta x^{(2)}} \right) - \bar\mu^{s-1}_{i-1j}\, \frac{\partial_{\rho^s_{ij}} B^s_{i-1j}}{\Delta x^{(1)}\Delta x^{(2)}} - \bar\mu^{s-1}_{i+1j}\, \frac{\partial_{\rho^s_{ij}} B^s_{i+1j}}{\Delta x^{(1)}\Delta x^{(2)}} - \bar\mu^{s-1}_{ij-1}\, \frac{\partial_{\rho^s_{ij}} B^s_{ij-1}}{\Delta x^{(1)}\Delta x^{(2)}} - \bar\mu^{s-1}_{ij+1}\, \frac{\partial_{\rho^s_{ij}} B^s_{ij+1}}{\Delta x^{(1)}\Delta x^{(2)}} = 0.$$
Rearranging terms yields
$$T_{i,j}(\bar\mu^{s-1}) = \bar\mu^{s-1}_{ij} - \frac{\Delta t_c}{\Delta x^{(1)}\Delta x^{(2)}} \Big( \bar\mu^{s-1}_{i-1j}\, \partial_{\rho^s_{ij}} B^s_{i-1j} + \bar\mu^{s-1}_{i+1j}\, \partial_{\rho^s_{ij}} B^s_{i+1j} + \bar\mu^{s-1}_{ij}\, \partial_{\rho^s_{ij}} B^s_{ij} + \bar\mu^{s-1}_{ij-1}\, \partial_{\rho^s_{ij}} B^s_{ij-1} + \bar\mu^{s-1}_{ij+1}\, \partial_{\rho^s_{ij}} B^s_{ij+1} \Big) = -\Delta t_c\, \partial_{\rho^s_{ij}} J_c(u, \rho) + \mu^s_{ij} \Big( 1 - \lambda^{(1)} \partial_{\rho^s_{ij}} F^{(1),s,+}_{ij} + \lambda^{(1)} \partial_{\rho^s_{ij}} F^{(1),s,-}_{ij} \Big) - \mu^s_{i-1j}\, \lambda^{(1)} \partial_{\rho^s_{ij}} F^{(1),s,+}_{i-1j} + \mu^s_{i+1j}\, \lambda^{(1)} \partial_{\rho^s_{ij}} F^{(1),s,-}_{i+1j}.$$
Using $\partial_{\rho^s_{ij}} B^s_{i-1j} = \partial_{\rho^s_{ij}} B^s_{i+1j} = \partial_{\rho^s_{ij}} B^s_{ij-1} = \partial_{\rho^s_{ij}} B^s_{ij+1} = k(\rho^s_{ij})$ and $\partial_{\rho^s_{ij}} B^s_{ij} = -4k(\rho^s_{ij})$ on the left-hand side, and (16)-(17), see Appendix B, on the right-hand side, leads to
$$T_{i,j}(\bar\mu^{s-1}) = \bar\mu^{s-1}_{ij} - \frac{\Delta t_c}{\Delta x^{(1)}\Delta x^{(2)}}\, k(\rho^s_{ij}) \Big( \bar\mu^{s-1}_{i-1j} + \bar\mu^{s-1}_{i+1j} - 4\bar\mu^{s-1}_{ij} + \bar\mu^{s-1}_{ij-1} + \bar\mu^{s-1}_{ij+1} \Big) = -\Delta t_c\, \partial_{\rho^s_{ij}} J_c(u, \rho) + \mu^s_{ij} - \lambda^{(1)} \Big( \big(\mu^s_{ij} - \mu^s_{i+1j}\big)\, \partial_{\rho^s_{ij}} F^{(1),s,+}_{ij} - \big(\mu^s_{ij} - \mu^s_{i-1j}\big)\, \partial_{\rho^s_{ij}} F^{(1),s,-}_{ij} \Big).$$
This is solved backward in time to obtain the Lagrange parameters $(\bar\mu^{s-1}_{ij})_{(i,j) \in I_\Omega}$. Note that the above expression $T(\bar\mu^{s-1}) = (T_{i,j}(\bar\mu^{s-1}))_{(i,j) \in I_\Omega}$ defines a coupled system for the Lagrange parameters of time step $s-1$ in terms of the quantities of time step $s$, see [18].

Proceeding further, we differentiate the Lagrangian with respect to $\tilde\rho^s_{ij}$ to get
$$\partial_{\tilde\rho^s_{ij}} L = \frac{\mu^s_{ij}}{\Delta t_c} - \tilde\mu^s_{ij} \left( \frac{1}{\Delta t_c} - \frac{\partial_{\tilde\rho^s_{ij}} F^{(2),s,+}_{ij}}{\Delta x^{(2)}} + \frac{\partial_{\tilde\rho^s_{ij}} F^{(2),s,-}_{ij}}{\Delta x^{(2)}} \right) + \tilde\mu^s_{ij-1}\, \frac{\partial_{\tilde\rho^s_{ij}} F^{(2),s,+}_{ij-1}}{\Delta x^{(2)}} - \tilde\mu^s_{ij+1}\, \frac{\partial_{\tilde\rho^s_{ij}} F^{(2),s,-}_{ij+1}}{\Delta x^{(2)}} = 0,$$
hence
$$\mu^s_{ij} = \tilde\mu^s_{ij} \Big( 1 - \lambda^{(2)} \partial_{\tilde\rho^s_{ij}} F^{(2),s,+}_{ij} + \lambda^{(2)} \partial_{\tilde\rho^s_{ij}} F^{(2),s,-}_{ij} \Big) - \tilde\mu^s_{ij-1}\, \lambda^{(2)} \partial_{\tilde\rho^s_{ij}} F^{(2),s,+}_{ij-1} + \tilde\mu^s_{ij+1}\, \lambda^{(2)} \partial_{\tilde\rho^s_{ij}} F^{(2),s,-}_{ij+1},$$
which by (18)-(19), see Appendix B, simplifies to
$$\mu^s_{ij} = \tilde\mu^s_{ij} - \lambda^{(2)} \Big( \big(\tilde\mu^s_{ij} - \tilde\mu^s_{ij+1}\big)\, \partial_{\tilde\rho^s_{ij}} F^{(2),s,+}_{ij} - \big(\tilde\mu^s_{ij} - \tilde\mu^s_{ij-1}\big)\, \partial_{\tilde\rho^s_{ij}} F^{(2),s,-}_{ij} \Big).$$
Finally, we differentiate the Lagrangian with respect to $\bar\rho^s_{ij}$ to obtain
$$\partial_{\bar\rho^s_{ij}} L = \frac{\tilde\mu^s_{ij}}{\Delta t_c} - \frac{\bar\mu^s_{ij}}{\Delta t_c} = 0 \quad \Rightarrow \quad \tilde\mu^s_{ij} = \bar\mu^s_{ij}.$$
The equality of the Lagrange parameters $\tilde\mu$, $\bar\mu$ stems from the fact that the diffusion is solved implicitly in the forward system (8). In the next section, we consider the diffusion coefficient $C$ as control for the macroscopic system, $u = C$.
In this case, the derivative of the Lagrangian with respect to the control reads
$$\partial_C L = \sum_{s=0}^{N^c_t} \sum_{i=1}^{N_{x^{(1)}}} \sum_{j=1}^{N_{x^{(2)}}} -\frac{\bar\mu^s_{ij}}{C\,\Delta x^{(1)}\Delta x^{(2)}} \Big( b^{s+1}_{i-1j} + b^{s+1}_{i+1j} - 4\, b^{s+1}_{ij} + b^{s+1}_{ij-1} + b^{s+1}_{ij+1} \Big).$$
Note that the Lagrange parameters $\tilde\mu$, $\bar\mu$ would differ if the diffusion were solved explicitly, i.e., using values of $\rho^s$ instead of $\rho^{s+1}$ in the diffusion operator $B$ of (8).

To validate the proposed approach, we consider a toy problem and compare the results of the space mapping method to optimal solutions computed directly on the microscopic level. For the toy problem, we control the potential strength $A$ of the microscopic model. The macroscopic analogue is the diffusion coefficient $C$.

Let $N^f_t \in \mathbb{N}$ and $\Delta t_f \in \mathbb{R}$ be the number of time steps and the time step size, respectively. We discretize the fine, microscopic model (1) in time to obtain
$$x^{s+1}_i = x^s_i + \Delta t_f\, v^s_i, \qquad v^{s+1}_i = v^s_i + \Delta t_f \Big( G(x^s_i, v^s_i) + A \sum_{j \neq i} F_{ij} \Big) \qquad \text{for } s = 1, \dots, N^f_t.$$
We denote $x = (x^s_i)_{(i,s) \in \{1,\dots,N\} \times \{0,\dots,N^f_t\}}$ and $v = (v^s_i)_{(i,s) \in \{1,\dots,N\} \times \{0,\dots,N^f_t\}}$. Furthermore, let $J_f(u, x)$ be the microscopic objective function. The microscopic Lagrange function $L(u, x, v, \mu, \tilde\mu, \bar\mu, \hat\mu)$ is then given by
$$L = J_f(u, x) + \sum_{s=0}^{N^f_t} \sum_{i=1}^{N} \mu^s_i \left( \frac{x^{(1),s+1}_i - x^{(1),s}_i}{\Delta t_f} - v^{(1),s}_i \right) + \sum_{s=0}^{N^f_t} \sum_{i=1}^{N} \tilde\mu^s_i \left( \frac{x^{(2),s+1}_i - x^{(2),s}_i}{\Delta t_f} - v^{(2),s}_i \right) + \sum_{s=0}^{N^f_t} \sum_{i=1}^{N} \bar\mu^s_i \left( \frac{v^{(1),s+1}_i - v^{(1),s}_i}{\Delta t_f} - G^{(1)}_i - A \sum_{j \neq i} F^{(1)}_{ij} \right) + \sum_{s=0}^{N^f_t} \sum_{i=1}^{N} \hat\mu^s_i \left( \frac{v^{(2),s+1}_i - v^{(2),s}_i}{\Delta t_f} - G^{(2)}_i - A \sum_{j \neq i} F^{(2)}_{ij} \right), \tag{13}$$
where
$$G^{(l)}_i(x^s_i, v^s_i) = -\frac{v^{(l),s}_i - \bar v^{(l)}(x^s_i)}{\tau}, \qquad F^{(l)}_{ij}(x^s_i, x^s_j) = b_F\, \frac{\|x^s_i - x^s_j\| - R}{\|x^s_i - x^s_j\|} \Big( x^{(l),s}_i - x^{(l),s}_j \Big) \quad \text{if } \|x^s_i - x^s_j\| < R, \qquad l = 1, 2.$$
The details of the derivatives of the force terms and the computation of the adjoint state can be found in Appendix C. Moreover, the derivative of the Lagrangian with respect to the control $u = A$ reads
$$\partial_A L = -\sum_{s=0}^{N^f_t} \sum_{i=1}^{N} \sum_{j \neq i} \Big( \bar\mu^s_i F^{(1)}_{ij} + \hat\mu^s_i F^{(2)}_{ij} \Big).$$

We apply ASM and the direct optimization approach AC to the optimization problem (3). In each iteration $k$ of the adjoint method for the fine model, a computation of the gradient $\nabla J_f$ for the stopping criterion, as well as several objective function and gradient evaluations for the computation of the step size $\sigma_k$, are required. These evaluations are (mostly) shifted to the coarse model in ASM.

Let $\Omega = [-5, 5]^2$ be the domain and $\bar v(x) = -x$ the velocity field of our toy example. We investigate whether the macroscopic model is an appropriate coarse model in the space mapping technique. For the microscopic interactions, we use the force term (4) with $b_F = 1/R$. Without interaction forces, $A = 0$, all particles are transported to the center of the domain $(x^{(1)}, x^{(2)}) = (0, 0)$ within finite time. Certainly, they overlap after some time. With increasing interaction parameter, i.e., increasing $A$, particles encounter stronger forces as they collide. Therefore, scattering occurs and the spatial spread increases. We control the spatial spread of the particle ensemble at $t = T$ in the microscopic model, leading to a cost
$$j_f(A, x) = \frac{1}{N} \sum_i \langle x^{N^f_t}_i, x^{N^f_t}_i \rangle,$$
and the objective function derivative with respect to the state variables $x_i$ is given by
$$\partial_{x^{(l),s}_i} J_f(A, x) = \begin{cases} \Big( \frac{1}{N} \sum_i \langle x^{N^f_t}_i, x^{N^f_t}_i \rangle - \omega^* \Big) \dfrac{2\, x^{(l),s}_i}{N} & \text{if } s = N^f_t, \\ 0 & \text{otherwise.} \end{cases}$$
We use $A$, the scaling parameter of the interaction force, as microscopic control. The coarse, macroscopic model is given by (2), and the spatial spread of the density at $t = T$ is given by
$$j_c(C, \rho) = \frac{1}{M} \sum_{(i,j)} \rho^{N^c_t}_{ij} \langle x_{ij}, x_{ij} \rangle, \qquad \partial_{\rho^s_{ij}} J_c(C, \rho) = \begin{cases} \dfrac{\langle x_{ij}, x_{ij} \rangle}{M} \Big( \frac{1}{M} \sum_{(i,j)} \rho^{N^c_t}_{ij} \langle x_{ij}, x_{ij} \rangle - \omega^* \Big) & \text{if } s = N^c_t, \\ 0 & \text{otherwise,} \end{cases}$$
where $M$ is the total mass, i.e., $M = \sum_{(i,j)} \rho^0_{ij} \Delta x^{(1)} \Delta x^{(2)}$. According to [35], the macroscopic diffusion constant $\tilde C$ is given by
$$\tilde C = \lim_{R \to 0} \int_0^R \frac{r}{R}\,(r - R)\, dr \approx \dots$$
We choose $\tau = 1/\tilde C$ to simplify the macroscopic diffusion coefficient ($C = A$), compare (2), and consider the parameters in Table 1.

Table 1: Model parameters $T$, $R$, $N$, $\Delta x^{(1)} = \Delta x^{(2)}$, $\Delta t_c$, $\Delta t_f$, $m$, $b_F$, $\tau$.

Two particle collectives with $N/2$ particles each are considered. We restrict the controls to $A, C \leq 10$ and compare the number of iterations of the two approaches needed to reach a given accuracy in the objective value $J_f(u_k, x)$. The step sizes $\sigma_k$ for AC are chosen such that they satisfy the Armijo rule and standard Wolfe condition (10)-(11) with $c_1 < c_2 = 0.9$. If an iterate violates the box constraint, it is projected onto the feasible set.

In the space mapping algorithm, the parameter extraction $T(u_k)$ is the solution of an optimization problem in the coarse model space, see Definition 2.1. The optimization is solved via adjoint calculus with $c_1$, $c_2$ as chosen above and $u_{start} = T(u_{k-1})$, which we expect to be close to $T(u_k)$.
9. If an iterate violates the box constraint, it is projected into the feasibleset.In the space mapping algorithm, the parameter extraction T ( u k ) is the solution of anoptimization problem in the coarse model space, see Definition 2.1. The optimization issolved via adjoint calculus with c , c as chosen above and u start = T ( u k − ), which we expectto be close to T ( u k ). Further, to determine the step size σ k for the control update, we considerstep sizes such that u k +1 = u k + σ k d k satisfies kT ( u k +1 ) − u c ∗ k < kT ( u k ) − u c ∗ k and thusdecreases the distance of the parameter extraction to the coarse model optimal control fromone space mapping iteration to the next.The optimization results and computation times (obtained as average computation timeof 20 runs on an Intel(R) Core(TM) i7-6700 CPU 3.40 GHz, 4 Cores) for target values ω ∗ ∈ { , , } are compared in Table 2. Both optimization approaches start far from theoptima at u = 8. Optimal controls u AC ∗ and u ASM ∗ closely match. The objective functionevaluations J f ( u AC ∗ , x ), J c ( u c ∗ , ρ ) describe the accuracy at which the fine and coarse modelcontrol problem are solved, respectively. J f ( u ASM ∗ , x ) denotes the accuracy of the spacemapping optimal control when the control is plugged into the fine model and the fine model To ensure comparability of the two optimization approaches, we use the same stopping criterion k J f ( u k , x ) k < tolerance = 10 − . ω ∗ = 1 ω ∗ = 2 ω ∗ = 3 u u AC ∗ u ASM ∗ u ASM = u c ∗ u ASM u ASM u ASM J f (cid:0) u AC ∗ , x (cid:1) . · − . · − . · − J c ( u c ∗ , ρ ) 6 . · − . · − . · − J f ( u ASM ∗ , x ) 2 . · − . · − . · − t AC [s] 126.1635 210.08 198.05 t ASM [s] 153.93 56.07 452.84objective function is evaluated. Note that the ASM approach in general does not ensure adecent in the microscopic objective function value J f ( u k , x ) during the iterative process andpurely relies on the idea to reduce the distance kT ( u k ) − u c ∗ k . 
However, ASM also generates small objective values $J_f(u_{ASM}^*, x)$ and therefore validates the proposed approach. Moreover, the model responses of the optimal controls illustrate the similarity of the fine and the coarse model, see Figure 3c-3d. The space mapping iteration finishes within two to four iterations and therefore needs fewer iterations than the pure optimization on the microscopic level here, see Figure 2. Note that each of the space mapping iterations involves the solution of the coarse optimal control problem. Hence, the comparison of the iterations may be misleading and we consider the computation times as an additional feature. It turns out that the iteration times vary and therefore this data does not allow to prioritize one of the approaches. Obviously, the times depend on the number of particles and the space and time discretizations.

Figure 2: Objective function value of the iterates for $\omega^* \in \{1, 2, 3\}$ (AC and ASM).

Figure 3: Initial conditions and space mapping solution for $\omega^* = 3$. (a) Initial positions ($t = 0$). (b) Initial density ($t = 0$). (c) Final positions ($A = u_{ASM}^*$, $t = T$). (d) Final density ($C = u_c^*$, $t = T$).

In the following, we consider problems with dynamics restricted to a spatial domain with boundaries. For the microscopic simulations we add artificial boundary behaviour, tailored for each application, to the ODEs.
We consider a scenario similar to the evacuation of N individuals from a domain with obstacles. The goal is to gather as many individuals as possible at a given location x_s ∈ Ω ⊂ R² up to the time T. The control is the evacuation point x_s = (x_s^(1), x_s^(2)). We model this task with the help of the cost functions

j_f(x_s, x) = (1/N) Σ_i ⟨ x_i^{N_f t} − x_s, x_i^{N_f t} − x_s ⟩,
j_c(x_s, ρ) = (1/M) Σ_{i,j} ρ_{ij}^{N_c t} ⟨ x_{ij} − x_s, x_{ij} − x_s ⟩,

evaluated at t = T, with respect to the location of the source.

The velocity v(x) is based on the solution to the eikonal equation with point source x_s. In more detail, we solve the eikonal equation

|∇T(x)| = 1/f(x), x ∈ Ω, T(x_s) = 0,

where T(x) is the minimal amount of time required to travel from x to x_s and f(x) is the speed of travel. We choose f(x) = 1 and set the velocity field to

v̄(x) = ∇T(x)/‖∇T(x)‖ · min{ ‖x − x_s‖, 1 }. (14)

In this way, the velocity vectors point into the direction of the gradient of the solution to the eikonal equation, and the speed depends on the distance of the particle to x_s. The particles are expected to slow down when approaching x_s, and the maximum velocity is bounded, ‖v̄(x)‖ ≤ 1. The solution to the eikonal equation on the two-dimensional Cartesian grid is computed using the fast marching algorithm implemented in C with Matlab interface by Volkmar Bornemann and Christian Ludwig. The travel time isoclines of the eikonal equation and the corresponding velocity field are illustrated in Figure 4. Note that we have to set the travel time inside the boundary to a finite value to obtain a smooth velocity field.

Figure 4: Solution of the eikonal equation in a bounded domain. (a) Travel time. (b) Velocity field.

The derivative of the macroscopic Lagrangian (12) with respect to the location of the point source, u = x_s, is given by

∂_{x_s^(l)} L = Σ_{s=0}^{N_c t} Σ_{i=1}^{N_{x^(1)}} Σ_{j=1}^{N_{x^(2)}} μ_{ij}^s / Δx^(1) ( ∂_{x_s^(l)} F_{ij}^{(1),s,+} − ∂_{x_s^(l)} F_{ij}^{(1),s,−} )
  + Σ_{s=0}^{N_c t} Σ_{i=1}^{N_{x^(1)}} Σ_{j=1}^{N_{x^(2)}} μ̃_{ij}^s / Δx^(2) ( ∂_{x_s^(l)} F_{ij}^{(2),s,+} − ∂_{x_s^(l)} F_{ij}^{(2),s,−} ), l = 1, 2,

with

∂_{x_s^(l)} F_{ij}^{(1),s,+} = ρ_{ij}^s ∂_{x_s^(l)} v_{ij}^{(1)} if v_{ij}^{(1)} ≥ 0, (i+1, j) ∈ I_Ω \ I_∂Ω,
                               ρ_{i+1 j}^s ∂_{x_s^(l)} v_{ij}^{(1)} if v_{ij}^{(1)} < 0, (i, j) ∈ I_Ω \ I_∂Ω, l = 1, 2,

∂_{x_s^(l)} F_{ij}^{(1),s,−} = ρ_{i−1 j}^s ∂_{x_s^(l)} v_{i−1 j}^{(1)} if v_{i−1 j}^{(1)} ≥ 0, (i, j) ∈ I_Ω \ I_∂Ω,
                               ρ_{ij}^s ∂_{x_s^(l)} v_{i−1 j}^{(1)} if v_{i−1 j}^{(1)} < 0, (i−1, j) ∈ I_Ω \ I_∂Ω, l = 1, 2;

∂_{x_s^(l)} F_{ij}^{(2),s,+} and ∂_{x_s^(l)} F_{ij}^{(2),s,−} are defined analogously.

To obtain the partial derivatives ∂_{x_s^(l)} v_{ij}^{(k)}, the derivative of the eikonal travel time with respect to the source is required. It is approximated numerically with central finite differences,

∂_{x_s^(l)} v_{ij}^{(k)} ≈ ( v_{ij}^{(k)}(x_s + Δx^(l) e^(l)) − v_{ij}^{(k)}(x_s − Δx^(l) e^(l)) ) / (2 Δx^(l)), k = 1, 2,

where e^(1) = (1, 0)^T and e^(2) = (0, 1)^T denote the unit vectors.

To investigate the robustness of the space mapping algorithm, we consider different obstacles in the microscopic and the macroscopic setting. Let Ω = [−⋯, ⋯]² be the domain. For the microscopic model we define an internal boundary 2 ≤ x^(1) ≤ ⋯, ⋯ ≤ x^(2) ≤ 8, see Figure 6a. For the macroscopic setting the obstacle is shifted by gap ≥ 0 in the x^(2)-coordinate. Additionally, we shift the initial density by the same gap, see Figure 6b. It is interesting to see whether the space mapping technique is able to recognize the linear shift between the microscopic and the macroscopic model. This is not trivial due to the non-linearities in the models and the additional non-linearities induced by the boundary interactions. Macroscopically, we use the zero-flux conditions (6) at the boundary. Microscopically, a boundary correction is applied, that means, a particle which would hypothetically enter the boundary is reflected into the domain, see Figure 5.
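The microscopic boundary correction just described can be sketched as follows; the rectangular obstacle extents and the helper name are placeholders we chose for illustration, not the values used in the paper:

```python
import numpy as np

# Hypothetical rectangular obstacle [ox0, ox1] x [oy0, oy1] (placeholder extents).
OBST = (2.0, 3.0, -1.0, 8.0)

def reflect(x_new):
    """Reflect a proposed position out of the obstacle.

    If an Euler step would place a particle inside the obstacle, the
    penetrating coordinate is mirrored at the nearest violated face."""
    ox0, ox1, oy0, oy1 = OBST
    x, y = x_new
    if ox0 < x < ox1 and oy0 < y < oy1:
        # distance to each face; mirror at the closest one
        d = {'left': x - ox0, 'right': ox1 - x, 'bottom': y - oy0, 'top': oy1 - y}
        face = min(d, key=d.get)
        if face == 'left':
            x = ox0 - (x - ox0)
        elif face == 'right':
            x = ox1 + (ox1 - x)
        elif face == 'bottom':
            y = oy0 - (y - oy0)
        else:
            y = oy1 + (oy1 - y)
    return np.array([x, y])
```

Positions outside the obstacle pass through unchanged, so the correction only acts on the hypothetical boundary-entering steps.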
Figure 5: Reflection at the boundary.

For computational simplicity, we restrict the admissible set of the controls to U_ad^f = U_ad^c = [−⋯, ⋯] × [−⋯, ⋯], i.e., the point source is located to the left-hand side of the obstacle.

Figure 6: Initial conditions with gap = 2. (a) Microscopic domain and initial positions x_0. (b) Macroscopic domain and initial density ρ_0.

The velocity v(x), given by (14), is restricted to the grid with spatial step sizes Δx^(1) = Δx^(2) = 0.⋯. A point source x_s ∈ C_ij is thereby projected to the cell centre of the corresponding cell,

P(x_s) = x_ij, x_s ∈ C_ij, (15)

where x_ij = (i Δx^(1), j Δx^(2)). The continuous velocity field of the microscopic model is approximated by the eikonal solution on a grid with smaller grid size. We choose the parameters from Table 1 in Section 3.2, except for T, which is set to T = 5. Moreover, we consider A, C = 0.87, for which the macroscopic and microscopic model behaviour match well in the situation without boundary interactions, see Table 1 in Section 3.1.

We apply the space mapping method to the described scenario with gap ∈ {0, 1, 2, 3}. Due to the grid approximation, we formally move from continuous optimization problems to discrete ones, which we approximately solve by applying ASM (and AC for the parameter extraction within ASM) for continuous optimization and projecting each iterate to the grid using (15). In general, due to the grid approximation we cannot ensure arbitrarily small step sizes σ_k ≥ c > 0. Therefore, we choose c = 0.⋯ and accept a step length σ_k whenever

‖T(u_k + σ_k d_k) − u_c*‖ ≤ ‖T(u_k) − u_c*‖.

As starting point for the parameter extraction, we choose u_start = u_c*, and the tolerance is set to 10^{−⋯}. We remark that the parameter extraction does not have a unique solution here; providing u_start = u_c* as starting value therefore stipulates that the parameter extraction identifies a solution T(u_k) near u_c*.

The macroscopic optimal solution with the corresponding gap is given by u_c* = [1.5, −0.5 + gap], compare Table 3. For gap = 0, we have T(u_c*) = u_c* and the space mapping is finished at k = 1, since the model optima coincide. For gap > 0, the parameter extraction identifies a shift between the modelling hierarchies, since the coarse model optimum is not optimal for the fine model. Indeed, the application of the coarse model optimal control leads to collision of the particles with the boundary and therefore delays the gathering of the particles around the source location x_c*, see Figure 7b. Space mapping for gap ∈ {1, 3} finishes within one iteration, since the parameter extraction is given by T(u_1) = u_1 + [0, gap] and hence T(u_2) = u_c*. For gap = 2, the first parameter extraction underestimates the shift in the x^(2)-direction and thus two iterations are needed to obtain the optimal solution, see Table 3.

Table 3: Iterates of ASM.

| gap | Iteration | u_k         | j_f(u_k, x) | T(u_k)      | j_c(T(u_k), ρ) |
| 0   | k = 1     | [1.5, -0.5] | 3.0652      | [1.5, -0.5] | 3.0218         |
| 1   | k = 1     | [1.5, 0.5]  | 3.5725      | [1.5, 1.5]  | 3.7905         |
|     | k = 2     | [1.5, -0.5] | 3.0652      | [1.5, 0.5]  | 3.0218         |
| 2   | k = 1     | [1.5, 1.5]  | 4.8059      | [1.5, 3]    | 4.4370         |
|     | k = 2     | [1.5, 0]    | 3.2800      | [1.5, 2]    | 3.3058         |
|     | k = 3     | [1.5, -0.5] | 3.0625      | [1.5, 1.5]  | 3.0218         |
| 3   | k = 1     | [1.5, 2.5]  | 7.1550      | [1.5, 5.5]  | 8.2927         |
|     | k = 2     | [1.5, -0.5] | 3.0652      | [1.5, 2.5]  | 3.0218         |

Figure 7: Solutions of the space mapping iterates at t = T with gap = 2. (a) Density (u = u_c*). (b) Positions (u = u_c*). (c) Positions (u = u_ASM*).

We investigated the need for additional iterations in more detail. It turned out that the behaviour is caused by the discretization of the optimization problem on the macroscopic grid: comparing the values of j_c, T and j_f at the neighbouring grid points shows that the projection onto the grid is responsible for the underestimated first extraction. The microscopic and macroscopic optimal solutions are shown in Figure 7.

In the following, the control of a material flow system with a belt conveyor is considered. Similar control problems have been investigated in [18]. We use the microscopic model proposed in [21], which describes the transport of homogeneous parts with mass m and radius R on a conveyor belt Ω ⊂ R² moving with velocity v_T = (v_T^(1), 0)^T ∈ R². The bottom friction is G(v) = −γ_b (v − v_T), γ_b ≥ 0, and the force F modelling the interparticle repulsion is given by

F(x) = c_m (2R − ‖x‖) x/‖x‖ if ‖x‖ ≤ 2R, and F(x) = 0 otherwise,

with c_m > 0. The particles (goods) are transported in the direction of v_T^(1) and are redirected at a deflector to channel them.
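A minimal explicit Euler sketch of this microscopic conveyor dynamic could look as follows; unit mass and all parameter values here are illustrative assumptions, not the constants used in the paper:

```python
import numpy as np

def conveyor_step(x, v, dt, v_T, gamma_b=1.0, c_m=1.0, R=0.01):
    """One explicit Euler step of the conveyor particle model (unit mass).

    Bottom friction G(v) = -gamma_b (v - v_T) drags particles towards the
    belt velocity; overlapping goods repel with the soft-core force
    F(d) = c_m (2R - |d|) d/|d| for |d| <= 2R."""
    N = x.shape[0]
    acc = -gamma_b * (v - v_T)                 # friction towards belt velocity
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d = x[i] - x[j]
            r = np.linalg.norm(d)
            if 0.0 < r <= 2.0 * R:
                acc[i] += c_m * (2.0 * R - r) * d / r   # soft-core repulsion
    return x + dt * v, v + dt * acc
```

The double loop makes the O(N²) cost of the pairwise interactions explicit, which is exactly what motivates the macroscopic approximation for optimization.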
A common way to describe such boundary interactions is to apply obstacle forces which are modelled similarly to the interaction force between particles [25]. Here, we consider

F_obst(x) = c_obst (R − ‖x‖) x/‖x‖ if ‖x‖ ≤ R, and F_obst(x) = 0 otherwise,

where x is the distance vector to the closest point of the boundary. Note that this is a slight variation of [25], as the interaction takes place with the closest boundary point only, see also Remark 3. Further note that the computation of adjoint states analogous to Section 3.1 can become very complicated for this boundary interaction. We therefore avoid the computation of the microscopic optimal solution u_f* and use the proposed space mapping approach instead.

The performance evaluation used here is based on the number of goods in the domain Ω at time T, given by

j_f(v_T^(1), x) = ( Σ_{i=1}^N 1_{x_i^{N_f t} ∈ Ω} ) − ω*.

The transport is modeled macroscopically with the advection-diffusion equation (2). The corresponding macroscopic performance evaluation is given by

j_c(v_T^(1), ρ) = (N/M) Σ_{(i,j): x_ij ∈ Ω} ρ_{ij}^{N_c t} Δx^(1) Δx^(2) − ω*.

We apply zero-flux boundary conditions (6) for the advective and the diffusive flux at the deflector.
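Both the closest-point obstacle force and the counting objective can be sketched directly; the polyline sampling of the deflector and all parameter values are illustrative assumptions:

```python
import numpy as np

def obstacle_force(x, boundary_pts, c_obst=1.0, R=0.01):
    """Repulsive deflector force acting through the CLOSEST boundary point
    only. boundary_pts is a polyline sampling of the deflector (an
    assumption made for this sketch)."""
    d = x - boundary_pts[np.argmin(np.linalg.norm(boundary_pts - x, axis=1))]
    r = np.linalg.norm(d)
    if 0.0 < r <= R:
        return c_obst * (R - r) * d / r
    return np.zeros(2)

def goods_in_domain(x, xmin, xmax, ymin, ymax):
    """Microscopic performance measure: number of goods inside Omega."""
    inside = (x[:, 0] >= xmin) & (x[:, 0] <= xmax) \
           & (x[:, 1] >= ymin) & (x[:, 1] <= ymax)
    return int(inside.sum())
```

Selecting only the closest boundary point keeps the force single-valued but makes it non-smooth across the medial axis of the deflector, which is one reason the adjoint computation becomes delicate here.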
Remark 3.
Note that if the boundary were discretized with stationary points and the boundary interaction were modelled with the help of soft-core interaction forces in the microscopic setting, as for example in [25], the model would allow for direct optimization. Nevertheless, many applications involve a huge number of (tiny) goods, for example in the production of screws. The pairwise microscopic interactions would blow up the computational effort; hence it makes sense to consider a macroscopic approximation for optimization tasks.
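The alternative mentioned in the remark can be sketched with a hypothetical helper that samples a straight wall segment into stationary "boundary particles"; the spacing h and the function name are our choices, not part of the paper:

```python
import numpy as np

def boundary_as_particles(segment_start, segment_end, h):
    """Discretize a straight boundary segment into stationary points
    spaced (at most) h apart. With soft-core forces as in [25], these
    act like additional frozen particles, keeping the fine model
    amenable to direct (adjoint-based) optimization."""
    p0 = np.asarray(segment_start, dtype=float)
    p1 = np.asarray(segment_end, dtype=float)
    n = int(np.ceil(np.linalg.norm(p1 - p0) / h)) + 1
    return p0 + np.linspace(0.0, 1.0, n)[:, None] * (p1 - p0)
```

With N goods and B such boundary points, every time step then costs O(N(N + B)) pairwise evaluations, which illustrates why this route becomes expensive for many tiny goods.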
We investigate the robustness of the space mapping technique for different diffusion coefficients C and whether variations in the diffusion coefficient affect the performance of the space mapping algorithm or the accuracy of the final result. We set Ω = [0, 0.⋯] × [0, 0.⋯], N = 100, ω* = 25, u_0 = 0.⋯, C ∈ {0, 0.⋯, 0.⋯, ⋯} and use the stopping criterion ‖T(u_k) − u_c*‖ < 10^{−⋯}. The values of the other model parameters are given in Table 4 and the results are summarized in Table 5. Each parameter extraction uses u_start = T(u_{k−1}) and has an optimality tolerance of 10^{−⋯}.

Table 4: Model parameters.

| T | R | N   | Δx^(1) = Δx^(2) | Δt_c | Δt_f | m | c_m | c_obst | τ |
| ⋯ | ⋯ | 100 | ⋯               | ⋯    | ⋯    | ⋯ | ⋯   | ⋯      | ⋯ |

Table 5: Results for the different diffusion coefficients.

| C                | 0  | 0.⋯ | 0.⋯ | ⋯  |
| u_c*             | ⋯  | ⋯   | ⋯   | ⋯  |
| u_ASM*           | ⋯  | ⋯   | ⋯   | ⋯  |
| j_f(u_ASM*, x)   | 26 | 25  | 25  | 24 |
| Iterations       | 5  | 4   | 4   | 5  |

For every diffusion coefficient, space mapping finishes in less than five iterations, and Table 5 indicates that the microscopic optimal control lies in the interval (0.⋯, 0.⋯). Even for C = 0, which is pure advection (without diffusion) in the macroscopic model, the ASM algorithm is able to identify a solution close to the microscopic optimal control. This underlines the robustness of the space mapping algorithm and emphasizes that even a very rough depiction of the underlying process can serve as coarse model. However, the advection-diffusion equations with C > 0 …

Figure 8: Density distribution (a) and particle positions (b) at t = 0.⋯ with u = 0.⋯.

Conclusion
We proposed space mapping-based optimization algorithms for interacting particle systems. The coarse model of the space mapping is chosen to be the macroscopic approximation of the fine model, which considers every single particle. The algorithm is validated with the help of a toy problem that allows for the direct computation of optimal controls on the particle level. Further, the algorithm was tested in scenarios where the direct computation of microscopic gradients is infeasible due to boundary conditions that do not naturally appear in the particle system formulation. Numerical studies underline the feasibility of the approach and motivate its use in further applications.
Acknowledgements
J.W. and S.G. acknowledge support from the DFG within the project GO 1920/7-1. S.G. is further supported by the DFG within the project GO 1920/10-1. C.T. was supported by the European Social Fund and by the Ministry of Science, Research and the Arts Baden-Württemberg.
Appendix A
The aggressive space mapping algorithm used to obtain the numerical results is given by
Algorithm 1
Aggressive space mapping
Require: u_0, tolerance
  Compute u_c* iteratively with adjoint calculus and starting value u_0
  k = 1, u_1 = u_c*
  Compute T(u_1) with adjoint calculus and starting value u_start
  while ‖T(u_k) − u_c*‖ > tolerance do
    d_k = −(T(u_k) − u_c*)
    Choose a step length σ_k (such that ‖T(u_k + σ_k d_k) − u_c*‖ ≤ ‖T(u_k) − u_c*‖)
    u_{k+1} = u_k + σ_k d_k
    Compute the update T(u_{k+1}) with adjoint calculus and starting value u_start
    k = k + 1
  end while

Appendix B

We provide more details on the derivatives in the macroscopic Lagrangian (12):

∂_{ρ_ij^s} F_{ij}^{(1),s,−} = { v_{i−1 j}^{(1)} if v_{i−1 j}^{(1)} < 0, (i−1, j) ∈ I_Ω \ I_∂Ω; 0 otherwise } = ∂_{ρ_ij^s} F_{i−1 j}^{(1),s,+}, (16)

∂_{ρ_ij^s} F_{ij}^{(1),s,+} = { v_{ij}^{(1)} if v_{ij}^{(1)} ≥ 0, (i+1, j) ∈ I_Ω \ I_∂Ω; 0 otherwise } = ∂_{ρ_ij^s} F_{i+1 j}^{(1),s,−}, (17)

∂_{ρ̃_ij^s} F_{ij}^{(2),s,−} = { v_{i j−1}^{(2)} if v_{i j−1}^{(2)} < 0, (i, j−1) ∈ I_Ω \ I_∂Ω; 0 otherwise } = ∂_{ρ̃_ij^s} F_{i j−1}^{(2),s,+}, (18)

∂_{ρ̃_ij^s} F_{ij}^{(2),s,+} = { v_{ij}^{(2)} if v_{ij}^{(2)} ≥ 0, (i, j+1) ∈ I_Ω \ I_∂Ω; 0 otherwise } = ∂_{ρ̃_ij^s} F_{i j+1}^{(2),s,−}. (19)

Appendix C
We provide more details on the derivatives of the microscopic Lagrangian (13). The derivatives of the terms G and F for k, l ∈ {1, 2} are defined in the following. The derivatives of the velocity selection mechanism with respect to the state variables are

∂_{x_i^{(l),s}} G_i^{(k)} = ∂_{x_i^{(l),s}} v^{(k)}(x_i^s) / τ,
∂_{v_i^{(l),s}} G_i^{(k)} = −1/τ if l = k, and 0 otherwise.

The derivatives of the interaction force F are, with r_ij = ‖x_i^s − x_j^s‖,

∂_{x_i^{(l),s}} F_ij^{(k)} = F̂(r_ij − R)/r_ij + (x_i^{(l),s} − x_j^{(l),s}) ∂_{x_i^{(l),s}} [ F̂(r_ij − R)/r_ij ] if r_ij < R, l = k,
∂_{x_i^{(l),s}} F_ij^{(k)} = ∂_{x_i^{(l),s}} [ F̂(r_ij − R)/r_ij ] (x_i^{(k),s} − x_j^{(k),s}) if r_ij < R, l ≠ k,

where

∂_{x_i^{(l),s}} [ F̂(r_ij − R)/r_ij ] = ( F̂′(r_ij − R)/r_ij² − F̂(r_ij − R)/r_ij³ ) (x_i^{(l),s} − x_j^{(l),s}).

First, we differentiate the Lagrangian with respect to x_i^{(1),s} to obtain

μ_i^{s−1} = −Δt_f ∂_{x_i^{(1),s}} J_f(u, x) + μ_i^s + Δt_f ( μ̄_i^s ∂_{x_i^{(1),s}} G_i^{(1)} + A Σ_{j≠i} ∂_{x_i^{(1),s}} F_ij^{(1)} (μ̄_i^s − μ̄_j^s) + μ̂_i^s ∂_{x_i^{(1),s}} G_i^{(2)} + A Σ_{j≠i} ∂_{x_i^{(1),s}} F_ij^{(2)} (μ̂_i^s − μ̂_j^s) ).

Second, we differentiate with respect to x_i^{(2),s} to obtain

μ̃_i^{s−1} = −Δt_f ∂_{x_i^{(2),s}} J_f(u, x) + μ̃_i^s + Δt_f ( μ̄_i^s ∂_{x_i^{(2),s}} G_i^{(1)} + A Σ_{j≠i} ∂_{x_i^{(2),s}} F_ij^{(1)} (μ̄_i^s − μ̄_j^s) + μ̂_i^s ∂_{x_i^{(2),s}} G_i^{(2)} + A Σ_{j≠i} ∂_{x_i^{(2),s}} F_ij^{(2)} (μ̂_i^s − μ̂_j^s) ).

Third, we differentiate with respect to v_i^{(1),s} and obtain

μ̄_i^{s−1} = Δt_f μ_i^s + μ̄_i^s + Δt_f ( ∂_{v_i^{(1),s}} G_i^{(1)} μ̄_i^s + ∂_{v_i^{(1),s}} G_i^{(2)} μ̂_i^s ).

Lastly, we differentiate with respect to v_i^{(2),s} and obtain

μ̂_i^{s−1} = Δt_f μ̃_i^s + μ̂_i^s + Δt_f ( ∂_{v_i^{(2),s}} G_i^{(1)} μ̄_i^s + ∂_{v_i^{(2),s}} G_i^{(2)} μ̂_i^s ).

References

[1]
G. Albi and L. Pareschi, Modeling self-organized systems interacting with few individuals: from microscopic to macroscopic dynamics, Applied Mathematics Letters, 26 (2013), pp. 397–401.
[2] D. Armbruster, S. Martin, and A. Thatcher, Elastic and inelastic collisions of swarms, Physica D: Nonlinear Phenomena, 344 (2017), pp. 45–57.
[3] M. Bakr, J. Bandler, K. Madsen, and J. Søndergaard, An Introduction to the Space Mapping Technique, Optimization and Engineering, 2 (2001), pp. 369–384.
[4] M. H. Bakr, J. W. Bandler, K. Madsen, and J. Søndergaard, Review of the space mapping approach to engineering optimization and modeling, Optimization and Engineering, 1 (2000), pp. 241–276.
[5] M. K. Banda and M. Herty, Towards a space mapping approach to dynamic compressor optimization of gas networks, Optimal Control Applications and Methods, 32 (2011), pp. 253–269.
[6] J. W. Bandler, R. M. Biernacki, S. H. Chen, P. A. Grobelny, and R. H. Hemmers, Space mapping technique for electromagnetic optimization, IEEE Transactions on Microwave Theory and Techniques, 42 (1994), pp. 2536–2544.
[7] J. W. Bandler, R. M. Biernacki, S. H. Chen, R. H. Hemmers, and K. Madsen, Electromagnetic optimization exploiting aggressive space mapping, IEEE Transactions on Microwave Theory and Techniques, 43 (1995), pp. 2874–2882.
[8] J. W. Bandler, Q. S. Cheng, S. A. Dakroury, A. S. Mohamed, M. H. Bakr, K. Madsen, and J. Søndergaard, Space mapping: the state of the art, IEEE Transactions on Microwave Theory and Techniques, 52 (2004), pp. 337–361.
[9] S. Boi, V. Capasso, and D. Morale, Modeling the aggregative behavior of ants of the species Polyergus rufescens, Nonlinear Analysis: Real World Applications, 1 (2000), pp. 163–176.
[10]
M. Burger, R. Pinnau, C. Totzeck, and O. Tse, Mean-field optimal control and optimality conditions in the space of probability measures, accepted for publication in SICON, (2020).
[11] M. Burger, R. Pinnau, C. Totzeck, O. Tse, and A. Roth, Instantaneous control of interacting particle systems in the mean-field limit, Journal of Computational Physics, 405 (2020), p. 109181.
[12] J. A. Carrillo, M. Fornasier, G. Toscani, and F. Vecil, Particle, kinetic, and hydrodynamic models of swarming, in Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences, Springer, 2010, pp. 297–336.
[13] Y.-L. Chuang, M. R. D'Orsogna, D. Marthaler, A. L. Bertozzi, and L. S. Chayes, State transitions and the continuum limit for a 2D interacting, self-propelled particle system, Physica D: Nonlinear Phenomena, 232 (2007), pp. 33–47.
[14] Y. H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM Journal on Optimization, 10 (1999), pp. 177–182.
[15] M. R. D'Orsogna, Y. L. Chuang, A. L. Bertozzi, and L. S. Chayes, Self-propelled particles with soft-core interactions: patterns, stability, and collapse, Phys. Rev. Lett., 96 (2006), p. 104302.
[16] M. R. D'Orsogna, Y.-L. Chuang, A. L. Bertozzi, and L. S. Chayes, Self-propelled particles with soft-core interactions: patterns, stability, and collapse, Physical Review Letters, 96 (2006), p. 104302.
[17]
D. Echeverría and P. W. Hemker, Space Mapping and Defect Correction, Computational Methods in Applied Mathematics, 5 (2005), pp. 107–136.
[18] M. Erbrich, S. Göttlich, and M. Pfirsching, Optimal packing of material flow on conveyor belts, Optimization and Engineering, 19 (2018), pp. 71–96.
[19] R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, The Computer Journal, 7 (1964), pp. 149–154.
[20] F. Golse, The mean-field limit for the dynamics of large particle systems, Journées Équations aux Dérivées Partielles, (2003), pp. 1–47.
[21] S. Göttlich, S. Hoher, P. Schindler, V. Schleper, and A. Verl, Modeling, simulation and validation of material flow on conveyor belts, Applied Mathematical Modelling, 38 (2014), pp. 3295–3313.
[22] S. Göttlich, A. Klar, and S. Tiwari, Complex material flow problems: a multi-scale model hierarchy and particle methods, Journal of Engineering Mathematics, 92 (2015), pp. 15–29.
[23] S. Göttlich, S. Knapp, and P. Schillen, A pedestrian flow model with stochastic velocities: microscopic and macroscopic approaches, Kinetic and Related Models, 11 (2018), pp. 1333–1358.
[24] S. Göttlich and C. Teuber, Space mapping techniques for the optimal inflow control of transmission lines, Optim. Methods Softw., 33 (2018), pp. 120–139.
[25]
D. Helbing and P. Molnár, Social force model for pedestrian dynamics, Physical Review E, 51 (1995), pp. 4282–4286.
[26] M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich, Optimization with PDE Constraints, Springer, 2009.
[27] H. Holden, K. H. Karlsen, and K. A. Lie, Operator splitting methods for degenerate convection-diffusion equations II: numerical examples with emphasis on reservoir simulation and sedimentation, Computational Geosciences, 4 (2000), pp. 287–322.
[28] A. L. Koch and D. White, The social lifestyle of myxobacteria, BioEssays, 20 (1998), pp. 1030–1038.
[29] N. K. Mahato, A. Klar, and S. Tiwari, Particle methods for multi-group pedestrian flow, Applied Mathematical Modelling, 53 (2018), pp. 447–461.
[30] J. Nocedal and S. Wright, Numerical Optimization, Springer Science & Business Media, 2006.
[31] G. Toscani et al., Kinetic models of opinion formation, Communications in Mathematical Sciences, 4 (2006), pp. 481–496.
[32] C. Totzeck, An anisotropic interaction model with collision avoidance, Kinetic and Related Models, 13 (2020), pp. 1219–1242.
[33] C. Totzeck and R. Pinnau, Space mapping-based receding horizon control for stochastic interacting particle systems: dogs herding sheep, Journal of Mathematics in Industry, 10 (2020), pp. 1–19.
[34] F. Tröltzsch, Optimale Steuerung partieller Differentialgleichungen, vol. 2, Springer, 2005.
[35]