Towards Decentralized Human-Swarm Interaction by Means of Sequential Hand Gesture Recognition
Zahi Kakish, Sritanay Vedartham, and Spring Berman
Abstract— In this work, we present preliminary work on a novel method for Human-Swarm Interaction (HSI) that can be used to change the macroscopic behavior of a swarm of robots with decentralized sensing and control. By integrating a small yet capable hand gesture recognition convolutional neural network (CNN) with the next-generation Robot Operating System ros2, which enables decentralized implementation of robot software for multi-robot applications, we demonstrate the feasibility of programming a swarm of robots to recognize and respond to a sequence of hand gestures that correspond to different types of swarm behaviors. We test our approach using a sequence of gestures that modifies the target inter-robot distance in a group of three Turtlebot3 Burger robots in order to prevent robot collisions with obstacles. The approach is validated in three different Gazebo simulation environments and in a physical testbed that reproduces one of the simulated environments.

This work was supported by the Arizona State University Global Security Initiative. Zahi Kakish and Spring Berman are with the School for Engineering of Matter, Transport and Energy, Arizona State University (ASU), Tempe, AZ 85281, USA, {zahi.kakish, spring.berman}@asu.edu. Sritanay Vedartham is a high school student at BASIS, Scottsdale, AZ 85259, USA, [email protected].
I. INTRODUCTION

As the price of low-power computing devices continues to decrease, the robotics community recognizes the increasing viability of robotic swarms for a multitude of applications. The emphasis on scalability and hardware redundancy in robotic swarms, however, makes developing robust and stable management and control tools difficult. In addition, human-in-the-loop approaches to managing a fleet of robots can lead to operator cognitive overload, impairing the operator's ability to continue working [1]. Our work seeks to address this issue in Human-Swarm Interfaces (HSI) by attempting to remove the interface altogether: a human operator gives commands through a sequence of hand gestures, and those commands are relayed to an entire swarm or team of robots. Before this can happen, however, further progress is required in human-swarm interfaces and control.

Past work in the field has explored control through a variety of interfaces, often referred to as Human-Swarm Interfaces. Kolling et al. provide an essential survey of the HSI field as it currently stands [2]. As mentioned above, one of the most difficult aspects of human-swarm control is the cognitive load placed on the user [1]. Managing multiple robots is a difficult task, and HSI systems are therefore designed with this premise in mind. Abstracting complex swarm behavior is essential to discretizing tasks into manageable and comprehensible feedback for the human user.
Lin et al. were able to generate artificial task functions capable of abstracting a swarm in real time to assist users [3]. After abstracting complex swarm behavior, [4] and [5] mapped the different abstracted behaviors onto single gesture inputs for tablet interfaces. However, both of these methods require a centralized server to function because of their robots' low computational power, and they require the difficult development of an external mobile application.

Various approaches have been attempted to control multi-robot systems by other physical means, including wearable devices using haptic feedback, as seen in [6][7], and electroencephalogram (EEG) brain-computer interfaces (BCI), as reported by Karavas et al. [8]. However, these approaches are hindered by the necessity of wearable hardware and by the cumbersome application process needed to read noisy EEG signals, respectively. These issues limit their applicability in everyday usage.

In this work, we present preliminary research into a human-swarm control paradigm capable of future use in decentralized control of a robotic swarm through human input that is realizable with the current generation of computer hardware. By decentralized, we mean that the robots' systems are independent of one another but are capable of inter-communication. With the advent of accelerated training in machine learning and advances in computer vision, many complex classification tasks are scalable to less computationally powerful devices. We seek to exploit this by combining advances in decentralized robotic systems using ros2 [9] with specialized convolutional neural networks (CNNs) capable of recognizing hand gestures. We hypothesize that concatenating a series of different hand gestures produces more refined control of a particular swarm task specified by a user.

In Section II, we establish a simplified system composed of a CNN and a simplified swarm behavior to validate our hypothesis. Next, we test our approach using this simplified swarm behavior on a small swarm (three robots) in various simulation testbeds. The system is further validated through physical experimentation using three robots in a re-creation of the final simulated testbed. Finally, we expand upon the results of the experiments and discuss the future applicability of this approach towards a decentralized swarm controlled by a human in the loop. At the time of writing, this is the only known work that formulates sequential hand gestures for controlling multiple robots using CNNs in a semi-decentralized manner.

II. METHODOLOGY

Our system is primarily composed of two parts: (1) a deep neural network classifying various hand gestures, and (2) a simplified swarm cohesion model for a small swarm size. This paper provides a validation of sequential hand gestures as a means of controlling a swarm. Therefore, we do not formulate the algorithm as a generalized model for scalable swarm sizes, which we leave to future work.
A. Deep Neural Network
Since low-power devices with limited computational capability, such as the Raspberry Pi and other ARM devices, are common in swarm robotics hardware architectures, most conventional and state-of-the-art neural network models are unusable or difficult to implement on them. To circumvent this issue, a special neural network, SqueezeNet, was deployed for hand gesture recognition [10]. The SqueezeNet model is capable of matching, or even surpassing, the classification rates of many of the most popular model types with only a fraction of the parameters, thus reducing the model size to a small fraction of that of comparable architectures.
1) Dataset and Preprocessing:
A training set consisting of six different gestures and 2,956 images was utilized for training the SqueezeNet model [11]. The images are black-and-white silhouettes of the gestures. A gesture set, G, containing all possible gestures for classification and available in the training data is defined as

G = {C, Fist, L, Ok, Peace, Palm}.     (1)

Figure 1 contains examples of the gestures found in the dataset.

Fig. 1. Silhouettes of gestures available through the dataset in [11]. In addition to these gestures, a blank image was used to classify the gesture None. A) a C shape, B) a Fist, C) an L shape, D) the Okay sign, E) a Palm, F) the Peace sign.

The dataset was expanded to ensure more robust classification of gestures during real-time operation. The data augmentation began with inverting the images horizontally to resemble the same gesture made with the opposite hand. The original image and the inverted image are then each used to generate three new images: one rotated 90° clockwise, one rotated 90° counter-clockwise, and one flipped upside-down. Therefore, seven new images are generated for each image in the original dataset.

To ensure that the model is capable of real-time performance, a seventh gesture was added to the dataset: 'None'. Since the user's hand is not consistently in the frame, such frames are classified as 'None' and the system waits for a gesture to enter the frame. A set of blank images, consisting of black images and black images containing Gaussian noise, is generated during preprocessing to support this class.

Further preprocessing was performed before training to ensure optimal results. Images in the dataset were cropped from their original size of 640 × 576 to 570 × 570.
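To make the augmentation step concrete, the following is a minimal sketch in Python/NumPy of the seven-image expansion described above; the function name and array-based representation are our own and not taken from the paper's code.

```python
import numpy as np

def augment_silhouette(image):
    """Return the seven additional images generated from one silhouette:
    a horizontal mirror, plus 90-degree clockwise, 90-degree
    counter-clockwise, and upside-down variants of both the original
    and the mirrored image."""
    mirrored = np.fliplr(image)                 # opposite-hand version
    variants = [mirrored]
    for base in (image, mirrored):
        variants.append(np.rot90(base, k=-1))   # rotated 90 deg clockwise
        variants.append(np.rot90(base, k=1))    # rotated 90 deg counter-clockwise
        variants.append(np.flipud(base))        # flipped upside-down
    return variants                             # 7 new images per original
```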
2) Training:
The SqueezeNet model was programmed and trained using the TensorFlow API, version 1.14 [12], running on a development box containing four Nvidia RTX 2080 graphics cards. The model was trained for 10 epochs with a Stochastic Gradient Descent optimizer with learning rate α and additional hyperparameters η and λ.
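As a rough illustration of this training setup, the sketch below uses the tf.keras API available in TensorFlow 1.14. The network shown is a small stand-in rather than the actual SqueezeNet architecture of [10], and the input size, learning rate, momentum, and batch size are assumed values, not those used in the paper.

```python
import tensorflow as tf  # TensorFlow 1.14, as used in the paper

# Small stand-in network; the real model is the SqueezeNet of [10].
# Input size, learning rate, momentum, and batch size are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(240, 240, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),  # six gestures + 'None'
])
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.01, momentum=0.9),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_images / train_labels: the augmented silhouette dataset of Sec. II-A.1
model.fit(train_images, train_labels, epochs=10, batch_size=32)
```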
B. Gesture Driven Controller

We first consider a waypoint system in which a finite number of robots, N, each plot a straight-line motion to a desired goal position (x_d, y_d)_n from an initial starting position (x_i, y_i)_n. To help validate the applicability of sequential gesture control, we focus only on one direction, x, and keep the y values arbitrary. For example, if a robot starts at (x_i, y_i) and its desired position is (x_d, y_d), the desired position is considered achieved once the robot's x-coordinate matches x_d. A set of waypoints, W_n, for each robot n is generated by augmenting x_i by x_offset, which is calculated from a user-defined number of waypoints, M:

x_offset = |x_i − x_d| / M     (2)

W_n = {(x_i + k · x_offset, y) : k = 1, ..., M}     (3)

Each robot, n, moves from one waypoint to the next in the set W_n until it reaches its final desired position (x_d, y_d)_n.
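A minimal sketch of the waypoint generation in (2)-(3), written in Python, is given below; handling of the direction of travel is our own addition, and the function name is illustrative.

```python
def generate_waypoints(x_i, x_d, y, M):
    """Waypoint set W_n from Eqs. (2)-(3): M evenly spaced x-targets
    between the start x_i and the goal x_d at a fixed y value."""
    x_offset = abs(x_i - x_d) / M                   # Eq. (2)
    step = x_offset if x_d >= x_i else -x_offset    # move toward the goal
    return [(x_i + k * step, y) for k in range(1, M + 1)]  # Eq. (3)
```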
With the waypoints generated, we define C : y_n × β → R to be the cohesion of the swarm, that is, a metric measuring the inter-robot distance of the swarm, where y_n is robot n's y position and β is the cohesion factor. β is a binary variable: β = 0 indicates an increase in cohesion and β = 1 a decrease. To increase the cohesion of a swarm is to reduce the distance between the robots, and to decrease the cohesion is to increase the distance between them. Additionally, we consider a small swarm of three agents in a line, separated by an offset in the y direction. The cohesion of this small swarm is best represented by a constant addition or subtraction of a small, static offset y_offset. An individual robot, n, applies a cohesion change by the following piecewise formula, where y positions are measured relative to the center of the line formation:

C(y_n, β) = y_n − y_offset   if y_n > 0 and β = 0
            y_n + y_offset   if y_n > 0 and β = 1
            y_n + y_offset   if y_n < 0 and β = 0
            y_n − y_offset   if y_n < 0 and β = 1     (4)

Each robot is modeled as a unicycle [13], where x and y are the robot's position and φ is its heading angle:

[ẋ, ẏ, φ̇]ᵀ = [v_o cos φ, v_o sin φ, ω]ᵀ     (5)

Here, v_o and ω are the robot's linear and angular velocity, respectively. Since we apply a constant linear velocity, the dynamics of the robot under the unicycle model are governed primarily by ω. To control the robot's motion from one point to a reference point, the difference between a desired heading angle, φ_d, and the robot's current angle, φ, is used as the error

e = φ_d − φ.     (6)

To prevent singularities from arising and to restrict the error to the interval (−π, π], we wrap the calculated error using atan2:

e_new = atan2(sin(e), cos(e))     (7)

Finally, the control input to the robot is applied with a proportional gain K_p:

ω = K_p e_new     (8)

This forms the basis for the control of the swarm using sequential hand gestures.
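The cohesion step (4) and the heading controller (6)-(8) reduce to a few lines of code. The sketch below assumes the line formation is centered at y = 0 and uses an illustrative proportional gain; the desired heading is taken as the bearing to the current waypoint.

```python
import math

def cohesion_step(y_n, beta, y_offset):
    """Eq. (4): beta = 0 pulls a robot toward the formation center line
    (increase cohesion); beta = 1 pushes it away (decrease cohesion).
    The formation center is assumed to lie at y = 0."""
    if y_n > 0.0:
        return y_n - y_offset if beta == 0 else y_n + y_offset
    if y_n < 0.0:
        return y_n + y_offset if beta == 0 else y_n - y_offset
    return y_n  # the middle robot stays on the center line

def angular_velocity(x, y, phi, x_ref, y_ref, k_p=1.0):
    """Eqs. (6)-(8): proportional heading control toward a waypoint.
    The gain k_p is an assumed value."""
    phi_d = math.atan2(y_ref - y, x_ref - x)   # desired heading
    e = phi_d - phi                            # Eq. (6)
    e = math.atan2(math.sin(e), math.cos(e))   # Eq. (7): wrap to (-pi, pi]
    return k_p * e                             # Eq. (8)
```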
III. SIMULATIONS

A series of 3D simulations was designed to test the viability of sequential gesture-based control of a swarm. To formalize the experimental simulations, the following assumptions were made. First, the simulations are limited to three robots. Though this may seem like a small swarm size, we believe that it provides a viable minimal benchmark for how our algorithm will work in practice. Second, odometry is provided by the simulation environment and not by internal (e.g., encoders) or external (e.g., GPS, cameras) sensors. Finally, a black background is used when reading images of the gestures to streamline classification.

A. Structure

The simulation was developed using ros2, the next generation of the Robot Operating System (ROS). Compared to the original ROS, ros2 provides an enhanced middleware programming environment, the removal of the ROS Master to make multi-robot decentralized approaches easier, and expanded platform and architecture support. Gazebo, a 3D robot simulation environment, is used to render the experiment. Gazebo is developed in tandem with ROS/ros2 and comes with many ROS drivers that simplify robot development by providing a simulation environment that functions similarly to a physical one. The experiment is run using the Turtlebot3 Burger robot by ROBOTIS [14], who provide 3D Gazebo models for use in simulation.

Each robot runs two separate nodes: one that contains drivers to connect to Gazebo to simulate differential drive control, sensor readings, etc., and another that is the primary motion controller as defined in Section II-B. The robots do not intercommunicate and only subscribe to messages from one external node that tells them what gestures were classified. This external node reads information from a generic webcam running the SqueezeNet model. Additionally, images captured by the webcam are modified to resemble those used for training, which were silhouette images of the different hand gestures. The captured camera images are cropped from the 640 × 480 resolution to 480 × 480 and then scaled to 240 × 240.
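A sketch of this capture pipeline using OpenCV is given below; the centered crop and the binarization threshold used to mimic the silhouette training images are assumptions on our part.

```python
import cv2

def preprocess_frame(frame):
    """Crop a 640x480 webcam frame to a 480x480 square, downscale it to
    240x240, and binarize it so it resembles the silhouette training data."""
    h, w = frame.shape[:2]                   # expected 480 x 640
    x0 = (w - h) // 2
    square = frame[:, x0:x0 + h]             # centered 480 x 480 crop
    small = cv2.resize(square, (240, 240))
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    _, silhouette = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
    return silhouette
```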
B. Gestures

For this experiment, the number of gestures used has been reduced to the following five:

G_possible = {Palm, Peace, Fist, C, L}     (9)

These gestures are mapped to the following actions the swarm may undergo:
• Palm: Stop movement of the swarm
• Peace: Resume movement of the swarm
• Fist: Read cohesion action
• C Sign: Increase
• L Sign: Decrease

Additionally, there is a sixth classified gesture, None, which, as stated in Section II-A.1, means that no gesture is recognized and no swarm action is applicable.
Palm (Stop) and Peace (Resume) are the only two gestures capable of controlling the swarm on their own. The rest require the user to give a sequence of gestures for the swarm to read: one gesture pertaining to the swarm behavior (cohesion) the user wishes to modify, and the next a gesture mapped to a modification variable (increase or decrease).

In the simulation, we rely on two sequences used to modify the cohesion of the swarm to help negotiate an obstacle. The first increases the cohesion of the swarm, β = 0, which means that the robots group closer to one another. This is done by giving the following commands in this order:

Palm → Fist → C → Peace     (10)

This sequence is read as "stop the swarm, read my cohesion command, increase cohesion by one step size, and resume moving." The second decreases the cohesion of the swarm, β = 1, resulting in an increase in the distance between the robots. This is done using the same hand gestures but with the decrease cohesion command:

Palm → Fist → L → Peace     (11)

Just like the increase cohesion sequence, this sequence is read as "stop the swarm, read my cohesion command, decrease cohesion by one step size, and resume moving."

As described earlier, the cohesion of the swarm is adjusted in steps. Specifically, each call to increase or decrease the swarm cohesion decreases or increases the distance between the robots by a calculated y_offset, respectively. If an obstacle requires the user to change the swarm's cohesion by multiple steps of y_offset, the user does not need to repeat the whole sequence multiple times but can concatenate the swarm command within a single sequence. For example, if the swarm cohesion needs to increase by two steps, a user would provide the following command:

Palm → Fist → C → Fist → C → Peace     (12)
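To illustrate how such concatenated sequences can be interpreted, the following sketch parses a classified gesture stream into swarm actions. It is an illustrative state machine, not the node implementation used in the paper.

```python
def parse_sequence(gestures):
    """Map a gesture sequence, e.g. ['Palm', 'Fist', 'C', 'Fist', 'C', 'Peace'],
    to swarm actions; 'None' classifications are ignored."""
    actions = []
    awaiting_modifier = False
    for g in gestures:
        if g == 'None':
            continue
        if awaiting_modifier:
            if g == 'C':
                actions.append('increase_cohesion')  # beta = 0
            elif g == 'L':
                actions.append('decrease_cohesion')  # beta = 1
            awaiting_modifier = False
        elif g == 'Palm':
            actions.append('stop')
        elif g == 'Peace':
            actions.append('resume')
        elif g == 'Fist':
            awaiting_modifier = True                 # read cohesion action
    return actions
```

For example, the sequence in (12) would be interpreted as ['stop', 'increase_cohesion', 'increase_cohesion', 'resume'].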
C. Simulated Environment
The three robots are placed in a line within three simulated testbeds, shown in Figure 2. The first contains two types of openings: one small opening in the middle and two intermediate-sized openings located at both ends of the testbed. The second testbed has only one small opening. Each testbed provides a different validation of the capability of our sequential gesture control scheme. The first demonstrates how individual commands can be given at the onset of an obstacle, and the other shows how, for more difficult obstacles, a user is capable of stringing together multiple gesture actions into one command. The first two testbeds are 8 m × m in size. Each robot is placed 1 m from the end of the testbed, and the robots are spread to an inter-robot distance of 1. m between one another. To complete each task, the robots have to move forward and reach the other end of the testbed, negotiating the obstacles before them by relying on a user's sequential gesture input.

The third testbed is a recreation of the real testbed used in the physical experiment section of the paper. Compared to the large surface area of the other testbeds, this testbed is significantly smaller at 2. m × . m. Additionally, the initial inter-robot distance is reduced to 0. m and the robots are placed 0. m from the end. The testbed contains a 1 m wide opening a meter from the robots' starting position. As in the other testbeds, the robots have to negotiate this obstacle using a single sequence of gestures to increase cohesion. After going through the opening, a different sequence is used to decrease the cohesion. This testbed's results act as validation for the success of this experiment using real robots. Figure 3 provides a brief overview of the robots increasing and then decreasing their cohesion to negotiate a small opening.

Fig. 2. The three testbeds created for simulation. A) An 8 m × m testbed containing multiple openings for the swarm to traverse through. B) An 8 m × m testbed containing only one small opening for the agents to negotiate. Compared to the previous testbed, this requires the user to string together multiple sequences of the same gesture to complete. C) A small 2. m × . m testbed with one small 1 m wide opening. This last testbed is recreated for physical validation in Section IV.

Fig. 3. An overview of the physical experiment with the gesture sequences required for the swarm to get through the small opening in the middle of the arena. A) The robot swarm begins on one end of the testbed. The Peace gesture is used as a standalone command to start the experiment. B) Once the robots get closer to the obstacle, the sequence (Palm → Fist → C → Peace) is given to increase the cohesion of the swarm and resume motion. C) After the robots have cleared the opening, another gesture sequence (Palm → Fist → L → Peace) is used to return the swarm to its initial cohesion. D) Finally, the robots reach the final waypoint and the experiment completes.

Fig. 4. The physical testbed.
IV. PHYSICAL EXPERIMENTS

To validate the use of sequential hand gesture control of a swarm in a physical setting, a physical testbed analogous to the third simulated testbed was built, as shown in Figure 4. The experiment was set up in the same manner as the simulated one. For this paper, the individual robot controller and gesture recognition code were written with the intention of interchangeability between the simulated and physical experiments. The only differences were parameter files, which provided environmental constraints and individual robot attributes depending on the environment in which the test was run.

However, certain aspects of the simulation were not available for use in our physical experiments. Odometry within simulation is accurately calculated by the Gazebo environment, but obtaining this same information in a physical experiment required the use of an overhead camera and ArUco fiducial markers [15][16] to calculate each individual robot's pose. A ros2 node was developed to track each robot's position from the overhead camera and calculate poses from a 0. m × . m marker placed atop each detected robot. This odometry information is then published to the robots for use in their controller nodes.

During simulation, all the individual robot ros2 nodes and the gesture node were run on the same computer. The physical experiment distributes the computation over a wireless network. Each robot runs its respective controller on its own hardware, but the robots do not communicate with one another. The only communication they receive is from two external sources: the node calculating individual robot odometry from the central overhead camera, and the node reading and classifying the sequential gestures from the user. Each of these nodes is also run on a separate computer, out of convenience rather than the inability to run both on the same one. This decentralization comes with a cost, however, in the form of an approximately 0.5 s delay. Even though the individual robots were capable of running the gesture detection node, we chose to keep the node running on a separate computer to keep the test runs between the simulation and physical experiments similar.

The physical experimental procedure is nearly identical to the simulation except for the software structure changes presented in this section. There are a few small changes in comparison to the simulation; however, we do not believe they reduce the validity of our physical experiment. One additional change was that the linear velocity of each robot was reduced due to the half-second network lag in the system.

V. DISCUSSION

We have successfully demonstrated our hypothesis by showcasing a simplified cohesion control model in both simulated and physical testbeds. The attached supplemental video provides the results of a single run on each simulated and physical testbed. Each simulated test completed successfully, and the controller responded correctly to the properly classified sequential gesture commands given by the SqueezeNet model in real time. As mentioned in Section III-A, the run corresponding to Testbed 2 demonstrated the ability of the system to read multiple instances of the same increase-cohesion gesture sequence in one input. Results from that run show that the provided input sequence was easily registered and enacted by the robots. Additionally, the physical experiment was able to finish successfully even with the network delay present in the system. We believe that re-creations of the larger testbeds, such as Testbeds 1 and 2 in our simulations, would also yield successful runs. Although the system is semi-decentralized, due to the odometry calculations and the gesture recognition node running separately from the robots, the results of these tests show the feasibility of a human operator interacting with a decentralized robot swarm by showing a robot a sequence of hand gestures.

Although all the experiments were successful, we did run into a minor classification issue during test runs. The SqueezeNet CNN would sporadically misclassify the hand gesture when the subject's hand left the camera's viewing area or when switching between hand gestures. We believe that this issue is likely caused by the transitional poses created while the hand is in motion, which cannot be classified. To help reduce this error, publishing of the predicted gesture was limited to once every half second instead of once every tenth of a second, which is the refresh rate of the ros2 node that classifies the gestures.
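A minimal sketch of such a rate-limited gesture publisher in rclpy is shown below; the topic name, message type, and the placeholder classification update are assumptions rather than the paper's actual node.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class GesturePublisher(Node):
    """Publish the most recent classified gesture at 2 Hz (every 0.5 s)
    instead of at the 10 Hz classification rate, suppressing transient
    misclassifications made while the hand is moving."""
    def __init__(self):
        super().__init__('gesture_publisher')
        self.pub = self.create_publisher(String, 'gesture', 10)
        self.latest = 'None'   # updated by the (omitted) camera/CNN callback
        self.create_timer(0.5, self.publish_latest)

    def publish_latest(self):
        msg = String()
        msg.data = self.latest
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(GesturePublisher())

if __name__ == '__main__':
    main()
```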
VI. FUTURE WORK

As stated at the beginning of Section II, the experiments shown were not generalized, but future work will aim to encompass multiple, generalized swarm behaviors. Research by Harriott et al. gives an interesting breakdown of numerous biologically inspired swarm behavior metrics [17] as possible future behaviors employable by our system. A further expanded repertoire of behaviors would increase the feasibility of this system for everyday work with human operators.

Although the lightweight SqueezeNet model is capable of running on low-powered hardware such as the Raspberry Pi due to its small size, Google AI has developed algorithms for more robust hand gesture recognition, without the need for silhouette images, that are capable of running on Android and iOS devices [18]. These algorithms can run on hardware similar to that of the Turtlebot3 and are able to classify a larger set of hand gestures. Our future work seeks to explore this and other algorithms capable of recognizing hand gestures against more dynamic and cluttered backgrounds.

Finally, we intend this work to help us pursue human-swarm interaction capable of rewarding a swarm for the actions or tasks it accomplishes, specifically through a form of dynamic online reinforcement learning (RL) on a decentralized swarm. Work by Nam et al. shows the feasibility of applying RL to predict or learn a human operator's "trust" in the swarm they are operating [19][20]. We hope that future work allows more unique and novel methods for human and swarm to learn from one another in different environments.

VII. CONCLUSION

Our paper provides preliminary work on using a sequence of hand gestures to control swarm behavior. This system is a combination of a small CNN model capable of recognizing silhouette images of hand gestures in real time and a decentralized robot development environment, ros2. We test this control strategy in a semi-decentralized manner in three simulated testbeds using three Turtlebot3 Burger robots. The system is further tested on a physical testbed with the same robots. Both environments yielded successful runs using a single swarm metric (cohesion of the swarm). In the future, we intend to expand upon this work and create a fully realized, decentralized swarm system controllable by a human operator, with no external equipment, by giving a sequence of commands to control the whole swarm.

ACKNOWLEDGMENT

The authors would like to thank Shiba Biswal (ASU), Karthik Elamvazhuthi (UCLA), and Varun Nalam (ASU) for their assistance in this work.
REFERENCES

[1] Jessie Y. C. Chen and Michael J. Barnes. Human–Agent Teaming for Multirobot Control: A Review of Human Factors Issues. IEEE Transactions on Human-Machine Systems, 44(1):13–29, February 2014.
[2] Andreas Kolling, Phillip Walker, Nilanjan Chakraborty, Katia Sycara, and Michael Lewis. Human Interaction With Robot Swarms: A Survey. IEEE Transactions on Human-Machine Systems, 46(1):9–26, February 2016.
[3] Chao-Wei Lin, Mun-Hooi Khong, and Yen-Chen Liu. Experiments on Human-in-the-Loop Coordination for Multirobot System With Task Abstraction. IEEE Transactions on Automation Science and Engineering, 12(3):981–989, July 2015.
[4] Nora Ayanian, Andrew Spielberg, Matthew Arbesfeld, Jason Strauss, and Daniela Rus. Controlling a team of robots with a single input. In , pages 1755–1762. IEEE, May 2014.
[5] Yancy Diaz-Mercado, Sung G. Lee, and Magnus Egerstedt. Distributed dynamic density coverage for human-swarm interactions. In , pages 353–358. IEEE, July 2015.
[6] Selma Music, Gionata Salvietti, Domenico Prattichizzo, and Sandra Hirche. Human-Multi-Robot Teleoperation for Cooperative Manipulation Tasks using Wearable Haptic Devices. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), page 8, 2017.
[7] Eduardo Castelló Ferrer. A wearable general-purpose solution for Human-Swarm Interaction. In Proceedings of the Future Technologies Conference, pages 1059–1076. Springer, 2018.
[8] George K. Karavas, Daniel T. Larsson, and Panagiotis Artemiadis. A hybrid BMI for control of robotic swarms: Preliminary results. In , pages 5065–5075, Vancouver, BC, September 2017. IEEE.
[9] Open Source Robotics Foundation. ros2, 2019.
[10] Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360, 2016.
[11] Brenner Heintz. Training a neural network to detect gestures with OpenCV in Python, 2018.
[12] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[13] Ricardo Carona, A. Pedro Aguiar, and José Gaspar. Control of unicycle type robots: tracking, path following and point stabilization. 2008.
[14] ROBOTIS. Turtlebot3, 2019.
[15] Francisco Romero Ramirez, Rafael Muñoz-Salinas, and Rafael Medina-Carnicer. Speeded up detection of squared fiducial markers. Image and Vision Computing, 76, June 2018.
[16] Sergio Garrido-Jurado, Rafael Muñoz-Salinas, Francisco Madrid-Cuevas, and Rafael Medina-Carnicer. Generation of fiducial marker dictionaries using mixed integer linear programming. Pattern Recognition, 51, October 2015.
[17] Caroline E. Harriott, Adriane E. Seiffert, Sean T. Hayes, and Julie A. Adams. Biologically-Inspired Human-Swarm Interaction Metrics. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 58(1):1471–1475, September 2014.
[18] Valentin Bazarevsky and Fan Zhang. On-device, real-time hand tracking with MediaPipe, August 2019.
[19] Changjoo Nam, Phillip Walker, Huao Li, Michael Lewis, and Katia Sycara. Models of Trust in Human Control of Swarms With Varied Levels of Autonomy. IEEE Transactions on Human-Machine Systems, pages 1–11, 2019.
[20] Changjoo Nam, Phillip Walker, Michael Lewis, and Katia Sycara. Predicting trust in human control of swarms via inverse reinforcement learning. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017.