Autonomous Planning for Multiple Aerial Cinematographers
Luis-Evaristo Caraballo, Ángel Montes-Romero, José-Miguel Díaz-Báñez, Jesús Capitán, Arturo Torres-González, Aníbal Ollero
Abstract—This paper proposes a planning algorithm for autonomous media production with multiple Unmanned Aerial Vehicles (UAVs) in outdoor events. Given filming tasks specified by a media Director, we formulate an optimization problem to maximize the filming time considering battery constraints. As we conjecture that the problem is NP-hard, we consider a discretized version and propose a graph-based algorithm that can find an optimal solution of the discrete problem for a single UAV in polynomial time. Then, a greedy strategy is applied to solve the problem sequentially for multiple UAVs. We demonstrate that our algorithm is efficient for small teams (3-5 UAVs) and that its performance is close to the optimum. We showcase our system in field experiments carrying out actual media production in an outdoor scenario with multiple UAVs.
I. INTRODUCTION
The use of Unmanned Aerial Vehicles (UAVs) for cinematography and audio-visual applications is becoming quite trendy. First, small UAVs are not expensive and can be equipped with high-quality cameras that are available in the market for amateur and professional users. Second, they can produce unique video streams thanks to their maneuverability and their advantageous viewpoints when flying.

Currently, these systems use at least two operators per aerial camera: a pilot for the vehicle and a camera operator. Our final objective is to develop a team of cooperative UAV cinematographers with autonomous functionalities. This would reduce the number of human operators and would allow the media Director to focus on artistic aspects. This Director is the person in charge of the media production, specifying desired shots to cover a certain event. Then, the multi-UAV system should be able to plan and execute the designed shots autonomously. This planning problem is challenging in large-scale scenarios, due to multiple constraints: (i) spatial constraints related to no-fly areas; (ii) temporal constraints regarding the events to film; and (iii) resource constraints related to the battery endurance of each UAV.

This paper proposes a multi-UAV approach for autonomous cinematography planning, aimed at filming outdoor events such as cycling or boat races (see Fig. 1). The work lies within the framework of the EU-funded project MULTIDRONE (https://multidrone.eu), which has developed autonomous media production with small teams of UAVs. In previous work [1], we proposed a graphical interface and a novel language so that the media production Director could design shooting missions. Thus, the Director specifies the characteristics of multiple shots that should be executed to film a given event. We call these shots filming tasks or shooting actions, and they are described basically by their starting time and duration, the starting camera position and relative positioning with respect to the target, and the geometrical formation of the cameras depending on the type of shot.

Department of Applied Mathematics II, University of Seville, Seville, Spain. GRVC Robotics Lab, University of Seville, Seville, Spain.

Fig. 1: Experimental media production with three UAVs taking different shots in parallel. Bottom, aerial view of the experiment with two moving cyclists. Top, images taken from the cameras on board each UAV.

In this paper, we use the Director's shooting mission as input and propose an algorithm to compute plans for the UAVs. Given their bounded battery, the problem of scheduling UAVs to cover the filming tasks is a complex optimization problem that is hard to solve.
After surveying related works (Section II), we contribute by formulating this novel optimization problem for autonomous filming (Section III). Then, we propose a graph-based solution to the problem (Section IV) that can find an optimal solution for a single UAV and approximate solutions for the multi-UAV case.

Our algorithm is deterministic and assumes that the trajectories of all moving targets in the event can be predicted with a given motion model. However, in order to account for model uncertainties and unexpected events (e.g., a UAV failure), we enhance the planner with a module that monitors the mission execution and triggers re-planning to address deviations from the original plan. Thus, plans are computed online in a centralized manner, which is feasible for small teams (3-5 UAVs). Moreover, we evaluate the performance of our algorithm through simulation (Section V) and showcase its integration for actual media production with multiple UAVs (Section VI).

II. RELATED WORK
There are several commercial products, such as DJI GO, AirDog, 3DR SOLO or Yuneec Typhoon, that implement follow-me autonomous capabilities to track a target visually or with a GPS. These approaches do not consider high-level cinematography principles for shot performance and just try to keep the target on the image. Additionally, there are some recent works to produce semi-autonomous aerial cinematographers [2], [3]. In these works, the user or Director specifies high-level commands such as shot types and positions, and the drone is in charge of implementing the navigation functionality autonomously. In [2], an outdoor application to film people is proposed, and different types of shots from the cinematography literature are introduced (e.g., close-up, external, over-the-shoulder, etc.). In [3], an iterative quadratic optimization problem is formulated to obtain smooth trajectories for the camera and the look-at point (i.e., the place where the camera is pointing at). No time constraints nor moving targets are included. The authors of [4] propose an autonomous system for outdoor aerial cinematography, but they do not explore the multi-UAV problem.

Some works [5] propose camera motion planning to achieve smooth trajectories, but in this paper, we focus on high-level planning, i.e., how to distribute filming tasks among the team members. As aerial shots can be viewed as tasks to be executed by the UAVs, algorithms for multi-robot task allocation [6] are of interest. In [7], a centralized algorithm based on a Monte-Carlo Tree Search is presented for multi-robot task allocation. The algorithm exploits the branch-and-bound paradigm to solve the problem. Others [8] have proposed both centralized and decentralized methods to deal with disaster management.

For aerial cinematography, it is also relevant to consider formulations where there are time constraints associated with the tasks [6], as visual shots may be triggered and executed with specific timelines. An auction-based method is presented in [9].
They decouple precedence and temporal constraints and treat them separately with two different algorithms that can work offline (producing a schedule of tasks in advance) and online (scheduling tasks as they arrive). The authors of [10] present a distributed algorithm for multi-robot task assignment where the tasks have to be completed within given deadlines, taking into account the battery constraints. For the special case of constant task duration, they present a distributed algorithm that is provably almost optimal.

Recently, an optimization problem similar to the one stated in this paper has been addressed for static targets and continuous time intervals [11]. They prove that the problem with unlimited battery lifetime can be solved in polynomial time by finding a maximum weight matching in a weighted bipartite graph. They conjecture that the optimization problem with limited battery lifetime is NP-hard; thus, the same conjecture remains for moving targets.

III. PROBLEM STATEMENT
In this section, we formally define the Cycling Filming Problem (CFP). Suppose that an outdoor event is to be filmed by a set of k UAVs with cameras and limited battery endurance given by a parameter b. The media Director specifies a set $T = \{T_1, \ldots, T_n\}$ of shooting actions (tasks) determined by waypoints and time intervals during which the UAVs should film the moving targets (e.g., cyclists). That is, a shooting action $T_i \in T$, which starts at time $t$ and ends at time $t' > t$, is determined by a list $(p_1, t_1), \ldots, (p_s, t_s)$ of pairs, where $t = t_1 < \cdots < t_s = t'$ and $p_j$ is the filming position of the camera at time $t_j$. A task, or part of it, can be performed by one or multiple UAVs (e.g., a UAV may film the first part of a task and the rest may be filmed by a different one). A flight plan for a UAV is a sequence $P = \{I_1, \ldots, I_m\}$ such that, for every $j$, $I_j$ is a subinterval of a task in $T$. Denote by $\mathcal{P} = \{P_1, \ldots, P_k\}$ the set of the flight plans for the k UAVs. The goal is to assign flight plans to each UAV in order to film as much as possible of the set $T$. The filming time of a flight plan assignment $\mathcal{P}$ is the sum of the time lengths of the subintervals of the tasks covered by the flight plans. Formally, it is defined as

$$FT(\mathcal{P}) = \sum_{i=1}^{n} \Bigg| \bigcup_{I \in \cup P_j} (I \cap T_i) \Bigg|.$$

There are one or more
Base Stations (BS), which can be static or dynamic, where the UAVs start from and where they can go back to recharge their battery at any time. It is assumed that it is possible to compute a path between any pair of locations. Also, given this path, it is assumed that the required time to travel along it can be estimated, as well as the expected cost in terms of battery. Thus, it can be checked at any moment whether a UAV has enough battery to return to a BS.
Problem 3.1: (CFP) Given a set $T$ of $n$ tasks and $k$ UAVs, each with battery endurance $b$, find a flight plan assignment maximizing the filming time.

Remark 1: For simplicity of presentation, we have assumed that charging at a BS occurs instantaneously. However, our results could be easily extended by considering a time δ to recharge the battery when a UAV arrives at a BS.

Notice that, in a solution, a UAV can enter into a task or leave it at any instant of the task's time interval. Hence, CFP is in general a continuous optimization problem that is conjectured to be NP-hard [11].

IV. THE APPROXIMATION ALGORITHM
In this section, we consider a discretization of the general problem that allows us to obtain an approximation to the CFP in polynomial time. Our discretization is based on the construction of a directed acyclic graph $G$ whose vertices are pairs $(p, t)$, where $p$ is a position and $t$ is an associated instant of time. In the following, we briefly show how to build this graph.

Let $\alpha \ll b$ be a positive real value. For every two consecutive waypoints of a task, subdivide the task into pieces with time length $\alpha$, except for maybe the last piece. (These filming positions are computed after predicting the target trajectory and depending on the shot type.) Let $\bar{T}_i$ denote the augmented task adding the new waypoints of the partition, as illustrated in Fig. 2.

Fig. 2: The hollow points represent the waypoints added to the tasks in the discretization process. The red (dark) path is an approximation, using the discretization waypoints, of the light (pink) path. The two BS marks refer to a moving Base Station in two different instants of time.

The waypoints of the tasks $T_i$, $i = 1, \ldots, n$, are vertices of the graph $G$. For every two consecutive pairs $(p_j, t_j)$ and $(p_{j+1}, t_{j+1})$ in a task $T_i$, add the edge $((p_j, t_j), (p_{j+1}, t_{j+1}))$ to $G$. We say that a waypoint $(p', t')$ is reachable from a waypoint $(p, t)$ if $t' - t$ is greater than or equal to the required time to travel from $p$ to $p'$. Now, given two tasks $T_i$ and $T_{i'}$, connect every pair $(p, t)$ in $T_i$ with the first waypoint $(p', t') \in T_{i'}$ which is reachable from $(p, t)$. In Fig. 2, $q$ is the first waypoint of $T$ which is reachable from $p$.

Let us now add the vertices and edges related to the BS. For every task vertex $(p, t)$ and every BS $\beta$, add the vertex $(p', t')$ and the edge $((p', t'), (p, t))$, where $p'$ and $t'$ are the departure position and departure time to leave BS $\beta$ and arrive at position $p$ exactly at time $t$. Analogously, add the vertex $(p'', t'')$ and the edge $((p, t), (p'', t''))$, where $(p'', t'')$ denotes the arrival waypoint at BS $\beta$ departing from position $p$ at time $t$. Finally, add edges between consecutive waypoints of a same BS. And, if there is more than one BS, for every waypoint $(p, t)$ in a BS $\beta$, add an edge from $(p, t)$ to $(p', t')$, where $(p', t')$ is the first reachable waypoint of another BS $\beta'$.

Notice that, for every vertex $(p, t)$ in $G$, its outgoing edges go toward vertices $(p', t')$ such that $t' > t$; hence the graph $G$ is acyclic. Also, notice that every edge $e$ has an associated travel time, which is the time difference between the connected vertices. If the edge connects consecutive vertices of a same task, then this time difference is also the filming time value of the edge, $FT(e)$. Otherwise, the filming time is zero.

Let $P = [(p_1, t_1), \ldots, (p_m, t_m)]$ be a path in $G$ such that $(p_1, t_1)$ and $(p_m, t_m)$ are BS vertices. $P$ corresponds to a flight plan if, for every BS vertex $(p_i, t_i)$ in $P$ ($i < m$), the next BS vertex $(p_j, t_j)$ fulfills $t_j - t_i \leq b$ or, otherwise, $j = i + 1$ and both vertices correspond to the same BS. An approximate solution for the CFP is to find $k$ such paths in $G$ maximizing the sum of the filming time values of the traversed edges.
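The core of the construction — subdividing tasks with step α, adding intra-task filming edges, and adding zero-filming edges to the first reachable waypoint of every other task — can be sketched as follows. This is our illustrative reconstruction: Base Station vertices are omitted for brevity, and the straight-line travel-time model with a fixed UAV speed is an assumption, not a value from the paper.

```python
import math

def travel_time(p, q, speed=3.0):
    """Time to fly from position p to q at an assumed UAV speed."""
    return math.dist(p, q) / speed

def discretize(task, alpha):
    """Subdivide a task (list of (position, time) waypoints) with step alpha."""
    out = []
    for (p0, t0), (p1, t1) in zip(task, task[1:]):
        t = t0
        while t < t1:
            lam = (t - t0) / (t1 - t0)
            # linear interpolation of the filming position at time t
            out.append((tuple(a + lam * (b - a) for a, b in zip(p0, p1)), t))
            t += alpha
    out.append(task[-1])
    return out

def build_graph(tasks, alpha):
    """Return DAG edges as (u, v, ft): intra-task edges carry filming time
    ft > 0; inter-task edges (to the first reachable waypoint) carry ft = 0."""
    aug = [discretize(T, alpha) for T in tasks]
    edges = []
    for wps in aug:
        for u, v in zip(wps, wps[1:]):
            edges.append((u, v, v[1] - u[1]))           # filming edge
    for i, wps in enumerate(aug):
        for j, other in enumerate(aug):
            if i == j:
                continue
            for u in wps:
                for v in other:                          # first reachable waypoint
                    if v[1] - u[1] >= travel_time(u[0], v[0]):
                        edges.append((u, v, 0.0))
                        break
    return edges
```

A full implementation would additionally insert the BS departure/arrival vertices and edges described above.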
Problem 4.1: (DCFP) Given a discretization graph $G$ and a battery capacity $b$, compute $k$ paths (flight plans) in $G$ such that the filming time is maximized. That is:

$$\text{Maximize} \sum_{e \in \cup_{i=1}^{k} P_i} FT(e),$$

where $P_i$ denotes the set of edges of the path (flight plan) assigned to the $i$-th UAV.

A. Greedy strategy
In order to alleviate the complexity of the multi-UAV optimization problem, we propose a greedy strategy. This means that we first solve the problem considering a single UAV; then, we remove the subintervals covered by this UAV from the tasks, and solve the remaining tasks with another UAV. Applying this approach sequentially, we can obtain an approximation to the optimal solution with $k$ UAVs.
1) Algorithm for one UAV:
Consider that the battery consumption is constant in time (normally it is not, but the average rate of battery consumption can be used). Then, if the UAV is at vertex $u$ with battery level $l$, it can traverse the edge $(u, v)$ only in the following cases:

1) $u$ and $v$ are both vertices of the same base station (the UAV stays at the BS and is not consuming energy).
2) $u = (p, t)$ and $v = (p', t')$, and $t' - t \leq l$.

We can prove the following result:

Theorem 4.2:
Given a discretization graph $G$ with parameter $\alpha$ and a battery capacity $b$, the problem DCFP for one UAV can be solved in $O\big(b\big(\frac{mn}{\alpha}\big)\sum_{i=1}^{n}|T_i|\big)$ time by using dynamic programming, where $n$ and $m$ are the number of filming tasks and Base Stations, respectively.

Remark 2:
Notice that the smaller the $\alpha$ value is, the more accurate and time-consuming the algorithm becomes.
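A minimal dynamic-programming sketch for the one-UAV case is given below. It is our illustration of the idea behind Theorem 4.2, not the paper's implementation: vertices are (label, time) pairs processed in time order, the state couples a vertex with the battery consumed since the last recharge, battery drains at a constant rate (the assumption above), and recharging at a Base Station is instantaneous (Remark 1).

```python
from collections import defaultdict

def best_plan(vertices, edges, b, bases):
    """Max filming time for one UAV over the discretization DAG.

    `vertices` are (label, time) pairs; `edges` are (u, v, ft) with ft the
    filming time of the edge; `bases` is the set of Base Station vertices.
    """
    adj = defaultdict(list)
    for u, v, ft in edges:
        adj[u].append((v, ft))
    # dp[(v, used)] = best filming time reaching v having spent `used` battery
    dp = {(s, 0.0): 0.0 for s in bases}
    for u in sorted(vertices, key=lambda w: w[1]):   # DAG order: by time
        for used in [k for (w, k) in dp if w == u]:
            score = dp[(u, used)]
            for v, ft in adj[u]:
                # waiting at a BS consumes no battery; flying consumes at a
                # constant rate, so the cost equals the elapsed time
                cost = 0.0 if (u in bases and v in bases) else v[1] - u[1]
                if used + cost > b:
                    continue                         # battery infeasible
                new_used = 0.0 if v in bases else used + cost  # recharge
                key = (v, new_used)
                if dp.get(key, -1.0) < score + ft:
                    dp[key] = score + ft
    # a valid flight plan must end at a Base Station
    return max(dp[(v, k)] for (v, k) in dp if v in bases)
```

The greedy strategy of Section IV-A would then call this routine $k$ times, zeroing out the filming time of the edges covered by each computed plan before the next iteration.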
2) Strategy for a small team of UAVs:
For the case in which a small team of UAVs is used, the one-UAV algorithm can be applied iteratively, updating the graph by removing the visited arcs. Of course, once the paths are computed, the UAVs would work in parallel, assuming that a collision avoidance approach is integrated into the vehicles. Although our greedy strategy is not optimal in general, it has linear scalability with the number of UAVs. The algorithm for $k$ UAVs has complexity:

$$O\Bigg(kb\Big(\frac{mn}{\alpha}\Big)\sum_{i=1}^{n}|T_i|\Bigg) \text{ time.} \quad (1)$$

V. SIMULATION EXPERIMENTS
This section depicts some experiments to assess the performance of our algorithm for DCFP. Several tests were performed to carry out two studies: (i) to validate the quality of the method; and (ii) to justify that a small team of UAVs is enough to guarantee optimal filming time in certain scenarios.

Notice that the filming time depends on the parallelization of the tasks and the lengths of the associated time intervals. Obviously, if at some instant $t$ there are $x$ active tasks and $k < x$, then it is not possible to cover all the proposed filming tasks. And even if, at any time, the number of active tasks is equal to or less than $k$, it may still not be possible to cover all of them due to battery constraints.

In order to measure the quality of the proposed method, we use the coverage ratio, which is the filming time using $k$ UAVs over the sum of the time lengths of all tasks, that is:

$$CR = \frac{\sum_{i=1}^{k}\sum_{e \in P_i} FT(e)}{\sum_{i=1}^{n} |T_i|}.$$
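Given per-UAV plans expressed as disjoint task subintervals (as the greedy strategy produces, since already-covered subintervals are removed before each iteration), the coverage ratio reduces to a simple quotient. A small sketch with illustrative names and data layout of our own choosing:

```python
def coverage_ratio(plans, tasks):
    """CR: filming time achieved by the k plans over the total task time.

    `tasks` maps task id -> (start, end); `plans` holds, per UAV, the
    disjoint (task_id, start, end) subintervals it films.
    """
    filmed = sum(e - s for plan in plans for (_tid, s, e) in plan)
    total = sum(t1 - t0 for (t0, t1) in tasks.values())
    return filmed / total
```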
1) Experiment 1. Coverage ratio vs number of UAVs:
Given two natural numbers $n$ and $x$, a scenario $A$ is generated randomly with $n$ tasks distributed longitudinally along a route such that at any time there are at most $x$ active tasks. After that, the developed algorithm is run on $A$, incorporating UAVs one by one until all the tasks are covered.

Random scenarios with $n = 10, \ldots$ and $x = 4, \ldots$ were generated, and we repeated the experiment 50 times for every pair of values $n$ and $x$. Figure 3 shows the average results for $n = 20$ and $x = 4$. For any average case $(n, x)$, a similar behavior was obtained: using $k$ UAVs, with $k$ around $x$, a coverage ratio around 0.8 is achieved and, for $k > x$, the greater $k$ is, the lower the increase of the coverage ratio is. Moreover, although we do not know the optimal value, $O$, of the coverage ratio using $k \geq x$ drones, the experiment shows that our result, $R$, satisfies $0.8 \leq R \leq O \leq 1$; that is, the approximation factor $O/R$ is between 1 and 1.25. We study this factor experimentally with an additional experiment.

Fig. 3: Coverage ratio vs number of UAVs, using the average data obtained from 50 repetitions of the experiment with $n = 20$ and $x = 4$.
2) Experiment 2. Approximation factor:
We compare the accuracy of the proposed greedy approach against an exhaustive method over the graph $G$. For that, we generated random simulation scenarios using realistic values: targets move following a straight line with a variable speed in the range of 1-2 m/s; the minimum and maximum speeds of the UAVs are 1 and 3 m/s, respectively; a full battery is considered to last 15 minutes; and shooting actions have a length of at most 80 m and time lengths between 30 and 70 seconds. With this range of parameters, we randomly generated four types of shooting actions: (1) Static, a single point where the UAV stays to film a target or panoramic views during a time interval; (2) Chase, a path parallel to the target trajectory using the same speed as the target, to follow it from behind during the time interval; (3) Flyby, a path parallel to the target trajectory but using a greater speed, in order to film the target from back to front; and (4) Orbit, a circular arc that crosses the target trajectory from one side to the other.

Due to the huge complexity of the exhaustive approach, we used $\alpha = 30$ s in order to get small discretization graphs. We used $k = 3$ UAVs and limited the temporal overlapping of the tasks to three, that is, there are always at most 3 active tasks. We generated several experiments and grouped the results by the number of generated tasks. Then, we computed the average of the coverage ratio in each group for the two strategies: our greedy proposal and the optimal exhaustive analysis. The results are shown in Fig. 4. Notice that our proposal has a very good performance and is much more efficient than the exhaustive method, which requires exponential time.

Fig. 4: Optimal versus approximated CR.

VI. FIELD EXPERIMENTS
In this section, we describe the integration of our algorithm with a real team of UAVs performing autonomous media production.
A. System integration
Figure 5 depicts the block diagram of our software architecture. All modules have been implemented in C++ and integrated with ROS Kinetic Kame. Our planning algorithm for shooting missions is the core of the system and runs on the High-Level Planner (HLP). The workflow of the system starts with the Director describing the shooting mission through the Dashboard, which is an intuitive GUI. We described the Dashboard and the process to transform shooting missions into filming tasks in previous work [1], where we proposed a novel language for the description of media missions. (These values are recommendations from the media experts in the MULTIDRONE project, in line with actual media productions. Code online at https://bitbucket.org/multidrone_eu/multidrone_full.)

Fig. 5: Block diagram of our system architecture. The planning process occurs in a centralized fashion at a ground station.

The Mission Controller (MC) receives the mission with the shooting actions and requests the HLP for a plan. Once the plan is computed, the MC sends to each UAV its part of the plan. Then, those associated filming tasks are executed on board each UAV in a distributed fashion. (The procedure for distributed mission execution is out of the scope of this paper.) The MC is also in charge of continuously monitoring the status of the UAVs and asking for re-planning. Thus, if a UAV has an emergency (e.g., it runs out of battery) and has to stop its current action, the MC would request the HLP for a new plan with the current initial conditions (battery and positions) of the available UAVs and the remaining filming tasks.

The Path Planner is an auxiliary module so that the HLP can compute collision-free paths (and associated costs) between waypoints. As we endeavor to produce media in outdoor areas, no-fly zones for the UAVs are also taken into account (e.g., to avoid the area with the audience). We implemented an A* algorithm for path planning in a map grid containing known no-fly zones and obstacles. Note that local collision avoidance would still be needed during mission execution.

Fig. 6: UAV platform used for the field experiments. It is equipped with a multimedia camera on a gimbal (bottom).

The aerial platforms used in our experiments (see Fig. 6) were three custom-designed hexacopters made of carbon fiber, with a size of … × … × … m, … kg of maximum take-off weight and a maximum flight time of … minutes. Each UAV is equipped with: an audiovisual camera mounted on a 3-axis gimbal for stabilization; an RTK GPS receiver; a Pixhawk 2.1 autopilot with ArduPilot; an Intel NUC for running onboard navigation algorithms; an Nvidia TX2 to manage video streaming; and a communication module that uses both LTE and Wi-Fi technology to communicate with the modules on the ground and for inter-UAV communication, respectively.

Fig. 7: Snapshots of the videos taken onboard at T=4s, T=14s and T=35s during the Establish shot (top row, UAV 2) and the Flyby shot (bottom row, UAV 3). The relative movement of the camera with respect to the targets can be appreciated.
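The Path Planner's grid search can be sketched as follows. This is a generic, minimal illustration of A* on an occupancy grid with no-fly cells, written by us rather than taken from the project's C++ implementation; the 4-connected grid and Manhattan heuristic are assumptions.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; truthy cells are no-fly zones/obstacles.
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]                 # reconstruct by walking parents back
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:
            continue                     # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g[nxt] + h(nxt), g[nxt], nxt))
    return None
```

In the real system the returned cell path would additionally be converted to metric waypoints and an associated travel cost for the HLP.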
B. Experimental results
We evaluated the whole system for shooting mission planning and execution during several days in an airfield located close to Seville (see Fig. 1). We created there a mock-up scenario for outdoor media production, including actual cyclists. We tested the system with multiple missions defined by the Director, considering different types of autonomous aerial shots. In this paper, we present results for two representative experiments that showcase the versatility of our planning algorithm. Planning is done on a ground station before the mission starts, taking as input the filming tasks and the current position and battery level of the UAVs. Even though our architecture allows for re-planning during mission execution in case of unexpected events, this was not needed in the included experiments. Moreover, our complete system can run visual tracking of the targets on the video streams to point the camera gimbal. However, as we were interested in the planning part in these experiments, the cyclists carried a GPS receiver to communicate their position to the UAVs and ease camera tracking.

The first experiment consists of a shooting mission involving 3 UAVs filming two actual cyclists. (A video of the complete experiment can be seen at https://youtu.be/nRM-TJ2njtg.) The Director specifies 3 sequences of shooting actions in parallel, with the same starting time and duration (70 seconds) per sequence. The first sequence only includes one shot of type Lateral (the UAV follows the target laterally at its same speed). The second sequence has two consecutive filming tasks, an Establish (the UAV approaches the target from behind, coming closer in distance and height) and a Chase (the UAV follows the target from behind at a constant distance), respectively. The third sequence also has two consecutive tasks, a Flyby (the UAV starts behind the target at a lateral distance but catches up with it to overtake it) and a Static (the UAV stays still filming a panoramic view). Figure 7 shows a composition of images taken from the UAV cameras during some of the parallel shots. The target trajectories and the computed plan can be seen in Fig. 8 (top). As there are 3 available UAVs with enough battery to cover the mission, the HLP computes a plan assigning directly one sequence of shooting actions to each UAV. We used a computer with an Intel i7 @ … GHz and 16 GB RAM as ground station to run the HLP module, obtaining the plan in … ms. All tasks were fully covered by the computed plan ($CR = 1$).

The second experiment consists of a shooting mission involving 2 UAVs filming one cyclist. The Director specifies a single Chase shot (… s), too long to be covered by a single UAV due to battery limits (we set up a shorter battery endurance in the UAVs to enforce this situation). Therefore, the computed plan (obtained in … ms) assigns one part of the Chase to each of the available UAVs. The computed plan can be seen in Fig. 8 (bottom). The first UAV starts covering the task from the beginning, and then returns to the Base Station after running out of battery. The second UAV replaces the first one at a relay point and covers the remaining part of the task. For safety reasons, both UAVs do not arrive at the relay point at the same time instant, so there is a short time interval between the end of the first filming and the start of the second. In particular, 8 seconds of the original shooting action were not covered during the relay operation ($CR = 0.…$).

Fig. 8: Filming tasks (left graphs) and plans computed for each UAV (right graphs). Plans for each UAV are shown in a different color, with dashed lines indicating UAV navigation and solid lines UAV shooting. Top, first experiment with 3 UAVs. Bottom, second experiment with 2 UAVs.

VII. CONCLUSIONS
In this paper, we presented an algorithm for planning cinematography missions with multiple UAVs covering outdoor events. The strategy is based on an efficient dynamic programming approach for one UAV, which is used in an iterative way to produce an approximate solution for the multi-UAV problem. Although the algorithm is deterministic and assumes a motion model to predict the targets' trajectories, our final architecture allows us to monitor mission execution and re-compute new plans online in case of contingencies like UAV failures or deviations from the original plan.

Results in simulation showed that, using a number of UAVs equal to the number of overlapping tasks, 80% of the total filming time can be covered, which justifies our assumption of small teams. We also demonstrated empirically that our approximate solutions are close to the optimum. Moreover, we presented field experiments to show the applicability of our approach for media production in realistic cycling race scenarios.

Future work will focus on considering a constrained model that minimizes the number of UAVs per task. It is reasonable to impose that a UAV leaves a task only if it is running out of battery or the end of the task is reached. Thus, we aim to reduce the complexity of the optimization problem, pursuing exact optimal solutions for the multi-UAV case.

ACKNOWLEDGMENT
This project has received funding from the Spanish Ministry of Economy and Competitiveness project (MTM2016-76272-R AEI/FEDER, UE), the European Union's Horizon 2020 research and innovation programme under grant agreement No 731667 (MULTIDRONE), and Marie Skłodowska-Curie grant agreement No 734922. This publication reflects the authors' views only. The European Commission is not responsible for any use that may be made of the information it contains. (A video of the complete second experiment can be seen at https://youtu.be/-8Y8OGbHE9c.)

REFERENCES

[1] Á. Montes-Romero, A. Torres-González, J. Capitán, M. Montagnuolo, S. Metta, F. Negro, A. Messina, and A. Ollero, "Director tools for autonomous media production with a team of drones," Applied Sciences, vol. 10, no. 4, 2020.
[2] N. Joubert, J. L. E, D. B. Goldman, F. Berthouzoz, M. Roberts, J. A. Landay, and P. Hanrahan, "Towards a Drone Cinematographer: Guiding Quadrotor Cameras using Visual Composition Principles," ArXiv e-prints, 2016.
[3] C. Gebhardt, B. Hepp, T. Nägeli, S. Stevšić, and O. Hilliges, "Airways: Optimization-Based Planning of Quadrotor Trajectories according to High-Level User Goals," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, 2016, pp. 2508–2519.
[4] R. Bonatti, C. Ho, W. Wang, S. Choudhury, and S. Scherer, "Towards a robust aerial cinematography platform: Localizing and tracking moving targets in unstructured environments," 2019.
[5] T. Nägeli, L. Meier, A. Domahidi, J. Alonso-Mora, and O. Hilliges, "Real-time planning for automated multi-view drone cinematography," ACM Transactions on Graphics, vol. 36, no. 4, pp. 1–10, Jul. 2017.
[6] E. Nunes, M. Manner, H. Mitiche, and M. Gini, "A taxonomy for task allocation problems with temporal and ordering constraints," Robotics and Autonomous Systems, vol. 90, pp. 55–70, Apr. 2017.
[7] B. Kartal, E. Nunes, J. Godoy, and M. Gini, "Monte Carlo tree search with branch and bound for multi-robot task allocation," in The IJCAI-16 Workshop on Autonomous Mobile Service Robots, 2016.
[8] E. G. Jones, M. B. Dias, and A. Stentz, "Time-extended multi-robot coordination for domains with intra-path constraints," Autonomous Robots, vol. 30, no. 1, pp. 41–56, 2011.
[9] E. Nunes, M. McIntire, and M. Gini, "Decentralized multi-robot allocation of tasks with temporal and precedence constraints," Advanced Robotics, vol. 31, no. 22, pp. 1193–1207, Nov. 2017.
[10] L. Luo, N. Chakraborty, and K. Sycara, "Distributed Algorithms for Multirobot Task Assignment with Task Deadline Constraints," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 3, pp. 876–888, 2015.
[11] O. Aichholzer, L. E. Caraballo, J. M. Díaz-Báñez, R. Fabila-Monroy, I. Ventura, and B. Vogtenhuber, "Scheduling aerial robots to cover outdoor events," in …