On Multi-Human Multi-Robot Remote Interaction
A Study of Transparency, Inter-Human Communication, and Information Loss in Remote Interaction
Jayam Patel · Prajankya Sonar · Carlo Pinciroli
Abstract
In this paper, we investigate how to design an effective interface for remote multi-human multi-robot interaction. While significant research exists on interfaces for individual human operators, little research exists for the multi-human case. Yet, solving this problem is critical to enable complex, large-scale missions in which direct human involvement is impossible or undesirable, and robot swarms act as semi-autonomous agents. This paper's contribution is twofold. The first contribution is an exploration of the design space of computer-based interfaces for multi-human multi-robot operations. In particular, we focus on information transparency and on the factors that affect inter-human communication in ideal conditions, i.e., without communication issues. Our second contribution concerns the same problem, but considering increasing degrees of information loss, defined as intermittent reception of data with noticeable gaps between individual receipts. We derived a set of design recommendations based on two user studies involving 48 participants.
Keywords
Information Transparency · Inter-Human Communication · Information Loss · Remote Interaction · Multi-Human Multi-Robot Interaction
J. Patel, Worcester Polytechnic Institute. E-mail: [email protected]
P. Sonar, Worcester Polytechnic Institute. E-mail: [email protected]
C. Pinciroli, Worcester Polytechnic Institute. E-mail: [email protected]
1 Introduction

Robot swarms promise solutions for missions in which direct human involvement is either impossible or undesirable, such as search-and-rescue, firefighting, planetary exploration, and ocean restoration [57]. When robot swarms are deployed to perform complex missions, autonomy is only part of the picture. Along with autonomy, it is equally important for human operators to monitor and affect the behavior of the swarm. This creates the issue of designing effective solutions for remote interaction between humans and robot swarms.

While a significant body of work exists on remote interaction involving single humans and one or more robots, the scenario in which multiple humans interact with a robot swarm has received little attention. In this paper, we argue that it will be common for multiple humans to cooperate in the supervision of robot swarms. First, the amount of information generated by robot swarms is likely to exceed the span of apprehension of any individual operator [56], even when considering highly skilled ones such as video gamers. Cooperation among human operators would make monitoring more efficient. Second, the involvement of multiple humans allows for improved flexibility in robot control and task assignment, an important advantage in complex operations.

However, the involvement of multiple humans comes with old and new challenges. Among the old, we highlight the need for information transparency, which is the ability of the interface-swarm system to convey useful data for the operators to understand and modify the status of the swarm [83,11,71,3,7,78]. Multiple operators also create the new challenge of conveying intentions and actions to other operators, i.e., effective inter-human communication, for better cooperation and conflict mitigation [77]. Inter-human communication can be either direct or indirect. Direct communication includes verbal and non-verbal communication (e.g., gestures) [30].
Indirect communication is mediated through the remote interface (e.g., a graphical user interface on a laptop or tablet). Effective indirect communication requires inter-operator transparency, which pushes for interface designs that make it simple for operators far away from each other to exchange information on their intentions and plans [5,51,9,83,71,3].

In this paper, we explore the design space of remote interfaces for multi-human multi-robot interaction. We study the role of direct and indirect communication among operators, and investigate how to achieve high levels of information and inter-operator transparency through several variants of our interface. The result of this work is a set of recommendations on which design elements contribute to making a remote interface effective. This part of our study builds upon previous work [61] in which we investigated the impact of transparency and inter-human communication on the performance of human operators in proximal interaction. Proximal interaction occurs when humans and robots share the same environment.

Remote interaction allows us to study another important aspect: the role of information loss. In this paper, we consider information loss as a decrease in the frequency of the visual information presented to the operators. We measure information loss as the time interval, measured in seconds, between the delivery of consecutive video frames (the inverse of frames per second). Packet loss, bandwidth limitations, and geographical distance between the locations of the operators and the robots act as causal factors for information loss.
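As a concrete illustration of this measure, a few lines of Python (our own sketch; the function name and sample timestamps are hypothetical, not taken from the study) compute the information loss from the arrival times of consecutive frames:

```python
# Information loss, as defined above: the mean time interval between the
# delivery of consecutive video frames, i.e., the inverse of the frame rate.

def information_loss(frame_timestamps):
    """Mean inter-frame interval (seconds) for a list of arrival times."""
    if len(frame_timestamps) < 2:
        raise ValueError("need at least two frames")
    gaps = [b - a for a, b in zip(frame_timestamps, frame_timestamps[1:])]
    return sum(gaps) / len(gaps)

# Frames arriving at a steady 10 Hz yield a loss of 0.1 s per frame.
timestamps = [0.0, 0.1, 0.2, 0.3, 0.4]
loss = information_loss(timestamps)   # 0.1 s between frames
effective_fps = 1.0 / loss            # 10 frames per second
```

Larger gaps between receipts directly lower the effective frame rate experienced by the operator.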
Information loss leads to degraded operator performance, lack of awareness and trust, and increased cognitive workload [20]. The last factor we consider in our study is that, in the presence of non-ideal communication, it is also likely that the operators experience heterogeneous levels of information loss, causing a disparity in workload and situational awareness across operators.

The main contributions of this paper can be summarized as follows:

– We provide an extensive investigation of the design space of remote interfaces for multi-human multi-robot interaction. We consider factors such as direct and indirect communication, information and inter-operator transparency, and homogeneous and heterogeneous information loss.
– We compile a set of design recommendations validated through a user study that included 48 participants. We implemented a highly configurable remote interface that incorporates these recommendations and enables future studies of this kind.

This paper is organized as follows. We discuss related literature on remote human-robot interaction in Sec. 2. In Sec. 3, we discuss the design of our configurable remote interface. We report the results of our user study in ideal conditions in Sec. 4. We then introduce different types of information loss and report the results of a dedicated user study in Sec. 5. We summarize our contributions and outline directions for future work in Sec. 6.
2 Related Work

Remote robot control and manipulation has been a field of interest since Goertz and Thompson laid the foundation of modern tele-operation [26]. The field has mostly focused on manipulators [29,48,80,34,47] rather than on mobile robots. This body of research has contributed advancements in tele-presence [23,39,17,40], tele-robotics [59,66], tele-operation [33,52,72,54,31], and tele-surgery [64,73,15]. This research has focused on identifying suitable interfaces and improving their usability [42,50,70,68,58,22,82,32], as well as proposing novel control architectures for these interfaces [12,18,45,44]. Chen et al. [10] categorize existing research according to the factors that affect remote control of robots. These factors are field of view, system orientation, camera viewpoints, depth perception, degraded video quality, time delay, and camera motion. Building upon this work, Feth et al. [24] and Kim et al. [37,38] present a shared control framework to allow multiple operators to interact with manipulators. Lee et al. [19] extend these shared control frameworks to study the impact of information delay on the performance of human operators. In their work, the authors incorporate a passivity-based controller to counteract the negative effects of information delay on the operator's performance. These
works are limited to interface design for remote interaction with industrial manipulators, and their findings may not be applicable to remote interfaces for manipulating numerous mobile robots. To the best of our knowledge, our study is the first to investigate the impact of transparency and inter-human communication on multi-human multi-robot interaction.

Loss of information has been recognized as a key factor in the performance and engagement of human operators [10,53,20,43,75,67,16,81,55,8]. Research suggests that the effect of information loss, and the ability to handle the loss, may vary according to the task and the interface used to interact with the system. Three methods have been proposed to mitigate the effects of loss on the performance of human operators: adopting passivity-based control methods [46,80,12,34,29], predictive displays [35,2,6,13,14,60,74,36,69], and higher granularity of control [62,1,41]. However, these studies are limited to the scenario in which a single operator interacts with one or more robots. Our study furthers this line of research by providing an extensive investigation of the factors that affect the design of remote interfaces for multi-human multi-robot interaction in the presence of information loss.
3 A Configurable Remote Interface

In this section, we present the main features of our remote interface and the behavior of the robots. At its essence, our interface is a web-based client-server architecture. The server runs ARGoS [65], a fast multi-robot simulator, on a node offered by Amazon Web Services (https://aws.amazon.com/). The server is implemented as a visualization plugin that accepts multiple connections from the clients. The client side is a web application implemented with Node.js (https://nodejs.org/) and WebGL (https://get.webgl.org/), which offers features similar to the original graphical visualization of ARGoS. A diagram of the client-server architecture is reported in Fig. 1 and a screenshot of the web interface is shown in Fig. 2. The source code of the system is available online as open-source software (https://github.com/NESTLab/argos3-webviz).

The process starts when a user performs a command on the client. The web interface allows the user to operate at multiple levels of granularity. In our previous work [63], we found that mixed granularity of control offers superior usability in complex missions that require both navigation and environment modification. Similarly to [63], in this paper we focus on a collective transport scenario due to the compositional nature of this kind of task: collective transport combines navigation, task allocation, and object manipulation. Our interface is therefore designed for this scenario and it mirrors many of the features we presented in [63]. It is important to highlight, however, that the remote interface presented here is a completely new artifact based on a different technology: in fact, the work in [63] studied proximal interaction with a touch-based interface.

Fig. 1: System overview.

3.1 Collective Transport

We employ a collective transport behavior based on the finite state machine shown in Fig. 3. The behavior is identical to the one discussed in our previous work [63].
The states in the finite state machine are as follows:
Reach Object.
On receiving the desired goal position for the object, the robots in the transport team navigate and organize themselves around the object in a circular manner. These positions are generated based on the number of robots in the team and their distance from the object. The state comes to an end once all the robots reach their designated positions.
Fig. 2: Screenshot of the interface running on an internet browser.
Approach Object.
After organizing themselves, the robots move towards the centroid of the object. The state comes to an end once all the robots are touching the object.
Push Object.
Once the robots are in contact with the object, the robots rotate in place to face the direction of the goal. The robots start moving at equal speed towards the goal, while maintaining a fixed distance from the centroid of the object. This strategy prevents the robots in front and on the sides from breaking formation. If a robot breaks the formation, the robots switch back to Reach Object, wait for its completion, and subsequently resume their transport operation. The state comes to an end once the object reaches the goal position.
Rotate Object.
The robots rearrange themselves around the object and move in a circular path in the outward direction, thereby rotating the object in place. If any robot breaks the formation, the robots rearrange themselves and resume rotating the object. The state comes to an end once the robots achieve the desired rotation.

3.2 User Interface
Fig. 3: Collective transport state machine.

Object Manipulation. Object manipulation is triggered when an operator selects an object with a left click; the interface overlays the selected object with a transparent bounding box. The goal position is then assigned with a right click. The operator can also define the goal position for multiple objects. In this case, the robots autonomously distribute across the objects and transport them using the collective transport behavior. If two or more operators manipulate the same object, the interface keeps the position specified by the last operator. Fig. 4a shows a selected object overlaid with a bounding box. Fig. 4b illustrates how the goal position is visualized. The desired position and orientation of the object are conveyed by the interface as shown in Fig. 4c and 4d.
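The conflict rule above, where the interface keeps the position specified by the last operator, amounts to a last-writer-wins registry of goals. The following sketch is our own illustration; the names (`GoalRegistry`, `set_goal`, `goal_of`) are hypothetical and not taken from the actual implementation:

```python
# Last-writer-wins goal assignment: when several operators set a goal for
# the same object, only the most recent command is kept.

class GoalRegistry:
    def __init__(self):
        self._goals = {}  # object id -> (operator id, (x, y, theta))

    def set_goal(self, object_id, operator_id, pose):
        """Record a goal; silently overwrites any earlier operator's goal."""
        self._goals[object_id] = (operator_id, pose)

    def goal_of(self, object_id):
        """Return the (operator_id, pose) currently in effect, or None."""
        return self._goals.get(object_id)

registry = GoalRegistry()
registry.set_goal("box1", "operator_A", (1.0, 2.0, 0.0))
registry.set_goal("box1", "operator_B", (3.0, 1.0, 1.57))  # B overrides A
assert registry.goal_of("box1")[0] == "operator_B"
```

The same rule applies to robot manipulation, as described next.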
Robot Manipulation.
Robot manipulation starts with an operator selecting a robot with a left click. The goal position is assigned using a right click. The interface overlays the selected robot with a transparent bounding box to convey the current selection. The operator can define the goal position for multiple robots at once. If the robot is performing the collective transport behavior during this request, the other robots in the collective transport team pause their operation until the selected robot reaches the desired position. In case the robot is part of an operator-defined team, the selected robot navigates to the newly specified position and the other robots continue their respective operations. When two or more operators want to manipulate the same robot, the interface processes the position specified by the last operator. Fig. 5a shows a selected robot overlaid with a bounding box to visualize the current selection. Fig. 5b shows the goal position determined by the operator, visualized as a colored representation of the selected robot. The color of the goal position matches the color of the fiducial markers to differentiate between the goal positions of
different robots. Fig. 5c shows the selected robot navigating to the specified goal position.

Fig. 4: Object manipulation by interacting with the object through the interface: (a) object recognition; (b) new goal defined; (c) robots push the object; (d) robots rotate the object.
Robot Team Selection and Manipulation.
In addition to manipulating a single robot, the operator can select a team of robots by pressing the control key and clicking the left mouse button. The goal position is still assigned with a right click. The interface overlays a transparent bounding box over all the selected robots to identify the current selection. If two or more operators have the same robot in their team, then the common robot navigates to the position specified by the last operator, without affecting other robots in other teams. Fig. 6a shows a screenshot in which the selected robots are overlaid with a bounding box. Fig. 6b shows the goal position visualized as colored virtual objects, one for each of the selected robots. The color of the virtual objects matches the color of the fiducial markers on the body of the robots. Fig. 6c shows the robots navigating to the goal position.
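The selection and goal commands described above travel from the web client to the server. As a sketch of what such a message might look like, the snippet below encodes a goal command as JSON; the field names and the helper `make_goal_command` are our own hypothetical illustration, not the actual wire protocol of the interface:

```python
import json

# Hypothetical command message sent by an operator's client when assigning
# a goal position to one robot or to a whole team.
def make_goal_command(operator_id, robot_ids, goal_xy):
    return json.dumps({
        "type": "set_goal",
        "operator": operator_id,
        "robots": robot_ids,              # one robot, or a whole team
        "goal": {"x": goal_xy[0], "y": goal_xy[1]},
    })

msg = make_goal_command("operator_1", ["robot_3", "robot_7"], (2.5, -1.0))
decoded = json.loads(msg)                 # the server decodes and dispatches
assert decoded["robots"] == ["robot_3", "robot_7"]
```

Carrying the operator identifier in each command is what lets the server apply the last-writer-wins rule and lets the interface display other operators' actions.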
Fig. 5: Robot manipulation by interacting with the robots through the interface: (a) robot selection; (b) new robot position; (c) robot navigating to the new position.

3.3 Transparency Modes

To investigate the role of the various elements of the user interface, we endowed our client with the possibility to provide information to the user in several modalities. The main insight in our work is to consider the natural field of view of the human eye (see Fig. 7). We implemented our client to allow for both central transparency, i.e., displaying elements in the center of the screen or directly above robots and objects (green region in Fig. 7); and peripheral transparency, i.e., relegating interface elements to the borders of the screen (yellow region in Fig. 7). The key difference between central and peripheral transparency is the type and quantity of information displayed. With central transparency, the information is contextual and limited to the robots effectively visible on the screen (which changes as the operator modifies the camera pose). Peripheral transparency, on the other hand, always displays summary information on all the robots and the progress of each task.

The interface can be configured to show or hide every element. For the purposes of our work, we identified four essential “transparency modes”:
Fig. 6: Robot team creation and manipulation by interacting with the interface.

– No Transparency (NT).
The interface hides all the information originating from the robots or other operators. The operator can still interact with robots and objects using all the control modalities.
– Central Transparency (CT).
The interface overlays a direction pointer and text to indicate the heading and current task of each robot (as shown in Fig. 8). The color of the pointer resembles the color of the fiducial markers on each robot to differentiate between multiple pointers. The robot status displays the current operation executed by the robot, corresponding to the states of the collective transport finite state machine (see Fig. 3). Additionally, the interface indicates the commands of other operators, to foster shared awareness across operators. This information is available only for entities in the operator's field of view. The operator can move around in the environment to view information on other robots and objects that are not in the current field of view.
– Peripheral Transparency (PT). The interface offers a robot panel, an object panel, and a log window containing global information on the system and its constituents (see Fig. 9). The robot panel contains one icon for each robot. The panel highlights the icons corresponding to the robots that are moving or performing operator-defined actions. The panel also displays a warning, through a blinking exclamation point, to notify the operators
of any fault conditions. These include getting stuck due to an obstacle, and software or hardware failures. The object panel shows all the objects in the environment. The interface highlights the objects currently manipulated by the robots. The panel also provides the functionality to select an object by clicking on its lock icon. An operator can convey their intention of manipulating an object by selecting the lock in the object panel. The interface highlights the lock with a blue icon to signify the operator's own selection and with a red icon to indicate the selection of another operator. An operator can lock only one object at a time and cannot overwrite the selection of other operators.

Fig. 7: Central and peripheral regions of the field of view. The overlaid green region indicates the central field of view. The overlaid yellow region indicates the peripheral field of view.

– Mixed Transparency (MT).
The interface also allows one to enable both central and peripheral transparency. In this case, the displayed information is a combination of the two transparency modes.

3.4 Communication Modes

Analogously to the transparency modes, the interface also defines different modes for inter-human communication. We classify inter-human communication into
direct, indirect, and a combination of both. The communication modes are described as follows.

Fig. 8: Central transparency showing the on-robot status and directional indicator.

– No Communication (NC).
In this mode, the operators are completely unable to communicate with each other. The interface hides all the information originating from other operators, such as which robots are being used and which objects are being manipulated.
– Direct Communication (DC).
In this mode, the operators can communicate verbally while performing the task. We established a verbal communication channel using Zoom, a video-conferencing application. The operators are allowed to ask for help and strategize at will towards the completion of the task.
– Indirect Communication (IC). In contrast to direct communication, in this mode the operators cannot verbally communicate their intentions and actions, but they can use the presented transparency modes to communicate indirectly. In this paper, the choice of which transparency mode was active was determined by us at experiment time for the purposes of our study. In a realistic setting, however, each operator is allowed to choose the most appropriate mode.

Fig. 9: Peripheral transparency mode showing the robot panel, the object panel, and a log (left to right).

– Mixed Communication (MC).
In this mode, the operators can communicate both directly and indirectly throughout the duration of the experiment.

4 User Study in Ideal Conditions

In this section, we study the transparency (T) and communication (C) modes under ideal conditions in remote interaction (R), i.e., with negligible loss of information. We base the experiments on the following main hypotheses.

Hypotheses on the impact of different transparency modes:
– H RT1: Mixed transparency (MT) has the best outcome with respect to other modes.
– H RT2: Operators prefer mixed transparency (MT) over other modes.
– H RT3: Operators prefer central transparency (CT) over peripheral transparency (PT).
Hypotheses on the impact of different communication modes:
Fig. 10: Remote study experiment setup.

– H RC1: Mixed communication (MC) has the best outcome with respect to other modes.
– H RC2: Operators prefer mixed communication (MC) over other modes.
– H RC3: Operators prefer direct communication (DC) over indirect communication (IC).
Experimental Setup.
We designed a game scenario (shown in Fig. 10) where the operators were given 9 robots to transport 6 objects (2 big and 4 small) to a goal region. Big objects were worth 2 points each, and small objects were worth 1 point each. The operators had to work as a team to score as many points as possible, out of a maximum of 8, in experiments lasting 8 minutes. The operators could move the big objects using the collective transport behavior, or directly use individual robots or team manipulation commands to push the objects.
Participant Sample.
For this user study, we recruited 28 university students. 14 of them (5 female, 9 male) had ages ranging from 19 to 37 years old.

Procedures.
Each session of the study had two participants and took approximately 105 minutes in total. After signing the consent form, we explained the task and gave each participant 10 minutes to familiarize themselves with the system. We randomized the order of the tasks and the modalities to reduce the influence of learning effects. After each task, the participants had to answer a subjective questionnaire.

- Did you understand your teammate's intentions? Were you able to understand why your teammate was taking a certain action?
- Could you understand your teammate's actions? Could you understand what your teammate was doing at any particular time?
- Could you follow the progress of the task? While performing the tasks, were you able to gauge how much of it was pending?
- Did you understand what the robots were doing? At all times, were you sure how and why the robots were behaving the way they did?
- Was the information provided by the interface clear to understand?

Fig. 11: The subjective questionnaire employed in our user study to assess the quality of interaction of an operator with our interface.
Metrics.
We recorded subjective and objective measures for each participant and each task. We used the following common measures:

– Situational Awareness. We used the Situational Awareness Rating Technique (SART) [76] on a 4-point Likert scale [49] to assess the awareness of the situation after each task.
– Task Workload. We used the NASA TLX [28] scale on a 4-point Likert scale to compare the perceived workload in each task.
– Trust. We used the trust questionnaire [79] on a 4-point Likert scale to compare the trust in the interface as affected by each transparency mode.
– Quality of Interaction. We used a custom questionnaire on a 5-point Likert scale to assess the team-level and robot-level interaction. The interaction questionnaire is reported in Fig. 11.
– Performance. We used the points earned in each task as a metric to gauge the performance achieved with each transparency mode.
– Usability. We asked participants to select the features (log, robot panel, object panel, and on-robot status) they used during the study. Additionally, we asked them to rank the transparency modes from 1 to 4, 1 being the highest rank.

4.2 Analysis and Discussion
Transparency Data.
Table 1 shows the summarized results for all the subjective scales and the objective performance. We used the Friedman test [25] to analyze the data and assess the significance between different modes of transparency. We derived a ranking based on the mean ranks for all the attributes that showed statistical significance (p < 0.05) or marginal significance.
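The Friedman statistic behind these rankings can be computed directly from a table of per-participant scores. The implementation below is a generic textbook sketch of ours, with made-up ratings rather than the study's actual data; it also returns the mean ranks from which orderings such as "MT > CT > PT > NT" are derived:

```python
def friedman_statistic(scores):
    """Friedman chi-square for a score table: one row per participant,
    one column per condition (e.g., NT, CT, PT, MT)."""
    n, k = len(scores), len(scores[0])

    def ranks(row):
        # 1-based ranks; ties share the mean of their rank positions.
        order = sorted(range(k), key=lambda j: row[j])
        r = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for m in range(i, j + 1):
                r[order[m]] = avg
            i = j + 1
        return r

    rank_sums = [0.0] * k
    for row in scores:
        for j, r in enumerate(ranks(row)):
            rank_sums[j] += r
    chi2 = (12.0 / (n * k * (k + 1))) * sum(R * R for R in rank_sums) \
        - 3.0 * n * (k + 1)
    mean_ranks = [R / n for R in rank_sums]
    return chi2, mean_ranks

# Four participants rating four modes; identical rankings across
# participants drive the statistic to its maximum n * (k - 1) = 12.
scores = [[1, 2, 3, 4]] * 4
chi2, mean_ranks = friedman_statistic(scores)
```

The resulting chi-square value is compared against the chi-square distribution with k − 1 degrees of freedom to obtain the p-value.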
Table 1: Results with relationships between transparency modes. The relationships are based on mean ranks obtained through a Friedman test. The symbol ∗ denotes a significant difference (p < 0.05) and the symbol ∗∗ denotes a marginally significant difference. The symbol − denotes negative scales, where a lower ranking is better.

SART SUBJECTIVE SCALE
Instability of Situation (−): NT > PT > CT > MT ∗∗
… (−): NT > PT > CT > MT ∗∗
… (−): not significant
…: … > CT > PT > NT ∗∗
…: … > CT > PT > NT ∗∗
…: …
…: … > CT > PT > NT ∗∗
…: … > CT > PT > NT ∗∗
…: … > MT > PT > NT ∗

NASA TLX SUBJECTIVE SCALE
Mental Demand (−): NT > PT > CT = MT ∗∗
… (−): not significant
… (−): not significant
…: …
… (−): PT > NT > MT > CT ∗∗
… (−): not significant

TRUST SUBJECTIVE SCALE
Competence: MT > CT > PT > NT ∗∗
…: … > CT > PT > NT ∗∗
…: … > CT > PT > NT ∗
…: … > CT > PT > NT ∗∗
…: … > CT > PT > NT ∗∗
…: … > CT > PT > NT ∗∗

INTERACTION SUBJECTIVE SCALE
Teammate's Intent: MT > CT > PT > NT ∗∗
…: … > CT > NT > PT ∗∗
…: … > CT > PT > NT ∗
…: … > CT > PT > NT ∗∗
…: … > MT > PT > NT ∗∗

PERFORMANCE OBJECTIVE SCALE
Points Scored: not significant
Communication Data.
Table 3 shows the summarized results of the communication user study. We analyzed the data using the Friedman test [25] to assess the significant relationships among different modes of communication. We used statistical significance (p < 0.05) and marginal significance to derive a ranking based on the mean ranks. Fig. 14 shows the percentage of operators using a particular feature. Fig. 15 shows the percentage of operators ranking each task based on their choice. Using the Borda count method, we derived an overall ranking based on the collected data and the user preference data (shown in Table 4). We inverted the ranking of the negative scales for the Borda count scores.

Fig. 12: Feature usability in the transparency user study.

Table 2: Ranking scores, in the transparency user study, based on the Borda count. The gray cells indicate the leading scenario for each type of ranking.

Borda Count                                   NT    CT    PT    MT
Based on Collected Data Ranking (Table 1)     22    63.5  38    76.5
Based on Preference Data Ranking (Fig. 13)    16    40    29    55
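The Borda count aggregation behind these scores can be reproduced with a short script. The per-attribute orderings below are made-up placeholders, not the study's data, and we assume the common convention that, with k candidates, the candidate in position i (best first) earns k − 1 − i points:

```python
# Borda count over per-attribute rankings of the four transparency modes.
MODES = ["NT", "CT", "PT", "MT"]

def borda(rankings):
    """rankings: list of orderings, best mode first. With k candidates,
    the mode in position i earns k - 1 - i points."""
    k = len(MODES)
    points = {m: 0 for m in MODES}
    for order in rankings:
        for i, mode in enumerate(order):
            points[mode] += k - 1 - i
    return points

# Hypothetical attribute-level orderings (e.g., "MT > CT > PT > NT").
rankings = [
    ["MT", "CT", "PT", "NT"],
    ["MT", "CT", "PT", "NT"],
    ["CT", "MT", "PT", "NT"],
]
scores = borda(rankings)
winner = max(scores, key=scores.get)
```

Summing points across all attributes (with negative scales inverted first, as noted above) yields an overall ranking of the modes.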
Table 2 shows that mixed transparency (MT) is the best transparency mode in terms of usability, supporting hypotheses H RT1 and H RT2. From the results, central transparency (CT) dominates peripheral transparency (PT), supporting hypothesis H RT3. In addition, we also analyzed the modes of transparency based on the sub-scales of the subjective data, as follows.

Fig. 13: Task preference in the transparency user study.
Mixed Transparency.
This mode is the overall best choice for the operators. The results suggest that this mode provides the operators with the best situational awareness, measured in terms of the least instability and complexity of the situation, and the best information arousal, level of concentration, information quality, and information quantity. Through this transparency mode, the operators had the most information about the actions and intentions of teammates and robots, as well as of the task progress. This led the operators to report the highest trust across all trust sub-scales.
Central Transparency.
This mode is the second best choice after mixed transparency. The operators had the best familiarity and clarity in terms of the information provided by the interface. The operators experienced the lowest mental load and reported the least effort in performing the task. Fig. 12 supports these findings, as 92% (13 out of 14 operators) indicated the on-robot status as the most useful feature.
Peripheral Transparency.
The operators reported peripheral transparency as the most cumbersome mode. The operators experienced the lowest awareness, which caused degraded trust. The operators reported that the mode was merely better than no transparency (NT), because the presence of some information is still better than no information.

Comparison with Proximal Interaction.
Overall, the conclusions of this study are in line with those we reported for proximal interaction. However, the results in this paper are more substantial compared to what we observed for proximal interaction. Unlike proximal interaction, mixed transparency in
Table 3: Results with relationships between communication modes. The relationships are based on mean ranks obtained through a Friedman test. The symbol ∗ denotes a significant difference (p < 0.05) and the symbol ∗∗ denotes a marginally significant difference. The symbol − denotes negative scales, where a lower ranking is better.

SART SUBJECTIVE SCALE
Instability of Situation (−): NC > DC > IC > MC ∗∗
… (−): NC > IC > DC > MC ∗∗
… (−): NC > DC > IC > MC ∗∗
…: … > DC > IC > NC ∗∗
…: … > DC > IC > NC ∗∗
…: … > DC > IC > NC ∗∗
…: …
…: …
…: … > DC > IC > NC ∗∗

NASA TLX SUBJECTIVE SCALE
Mental Demand (−): NC > IC > DC > MC ∗∗
… (−): NC > IC > DC > MC ∗∗
… (−): NC > IC > DC > MC ∗∗
…: … > DC > IC > NC ∗∗
… (−): NC > IC > DC > MC ∗∗
… (−): NC > IC > DC > MC ∗∗

TRUST SUBJECTIVE SCALE
Competence: MC > DC > IC > NC ∗∗
…: … > IC > DC > NC ∗∗
…: … > IC > DC > NC ∗
…: … > DC > IC > NC ∗∗
…: … > DC > IC > NC ∗∗
…: … > DC > IC > NC ∗∗

INTERACTION SUBJECTIVE SCALE
Teammate's Intent: MC > DC > IC > NC ∗∗
…: … > DC > IC > NC ∗∗
…: … > DC > IC > NC ∗∗
…: … > IC > DC > NC ∗∗
…: … > IC > DC > NC ∗∗

PERFORMANCE OBJECTIVE SCALE
Points Scored: not significant
remote interaction was the clear winner, both from the collected data ranking and the preference data ranking (see Table 2). Central transparency not only outperformed peripheral transparency in remote interaction, but dominated the results when compared to the findings of the study with proximal interaction. We speculate that this difference is due to the fact that, in proximal interaction, the operators had to devote effort to avoiding bumping into robots and other operators while walking. This made the operators alert and anxious, affecting their focus on the information offered by the interface and the transparency modes. In remote interaction, as there was no need to physically move, the operators could focus on the displayed information more effectively.

Fig. 14: Feature usability in the communication user study.

Table 4: Ranking scores, in the communication user study, based on the Borda count. The gray cells indicate the leading scenario for each type of ranking.
Borda Count                                  NC   DC   IC   MC
Based on Collected Data Ranking (Table 3)    24   67   53   96
Based on Preference Data Ranking (Fig. 15)   16   38   30   56
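Rankings like those in this table are aggregated with the Borda count [4]. A minimal sketch of that aggregation, using hypothetical per-attribute rankings; we assume the standard scheme in which, with k candidates, first place earns k−1 points and last place earns 0 (the exact point assignment used in the paper may differ):

```python
from itertools import chain

def borda_scores(rankings):
    """Aggregate rankings (each a best-to-worst list of labels) into
    Borda scores: with k candidates, 1st place earns k-1 points,
    2nd earns k-2, ..., last earns 0."""
    candidates = set(chain.from_iterable(rankings))
    k = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in rankings:
        for position, candidate in enumerate(ranking):
            scores[candidate] += (k - 1) - position
    return scores

# Two hypothetical per-attribute rankings over the communication modes:
example = borda_scores([["MC", "DC", "IC", "NC"],
                        ["MC", "IC", "DC", "NC"]])
# MC earns 3 points in each ranking, NC earns 0 in each.
print(example)
```

The mode with the highest total is the overall winner, which is how a leading scenario can emerge even when no single attribute decides the comparison on its own.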
Our experiments did not reveal a substantial difference in performance across transparency modes. We hypothesize that this lack of difference is due to the learning effect across the four runs that each team had to perform. Fig. 16 shows the performance in each task, and Fig. 17 reports the increase in performance with task order (the learning effect). As most of the teams were able to complete the task in less than 8 minutes, Fig. 18 shows the decrease in the time taken to complete the task with task order, a further indication of the learning effect.
Table 4 suggests that mixed communication (MC) is the best mode of communication, both in terms of usability preference and in terms of the data collected during the user study, supporting hypotheses H RC and H RC. In addition, direct communication (DC) outperformed indirect communication (IC), confirming hypothesis H RC. We also analyzed the modes of communication based on the sub-scales of the subjective data, considering each mode in turn.
Fig. 15: Task preference in the communication user study.
Mixed Communication.
Mixed communication was recognized as the best mode, not only based on the Borda count but also in the results of the subjective data. This mode yielded the best situational awareness, trust in the system, and interaction with the robots and the other operator, while imposing the lowest task load.
Direct Communication.
This mode was the second best. It outperformed indirect communication in terms of information awareness and communication with the other operator (operator-level information), resulting in better trust in the system and a lower workload than indirect communication.
Indirect Communication.
This mode was the third best choice. It proved better than direct communication at conveying robot-level information, allowing the operator to better understand and predict robot actions. As a result, the operators trusted this mode more in terms of predictability and reliability, but at the cost of experiencing a higher workload than with mixed and direct communication.
Comparison with Proximal Interaction.
Analogously to what we said about transparency, these observations are in line with the results of the proximal interaction study [61]. However, the results of this study were more decisive than those of the proximal interaction study. Also in this case, we observed
that the proximal interaction made the operators alert and anxious about the robots and the other operator. Also, as the operators had to physically walk around other robots, the interaction felt at times cumbersome. This observation is supported by the workload results of the proximal interaction studies in our previous work, which indicate a high workload in all modes of communication. In contrast, the workload results in remote interaction showed significant differences between the communication modes.
Our experiments did not reveal a significant difference in performance across communication modes. Similarly to what we discussed for transparency, we hypothesize that this lack of difference is due to the learning effect across the four runs that each team had to perform. Fig. 19 shows the points earned by the operators in each task, and Fig. 20 shows the learning effect as the increase in points earned with task order. As most of the operator teams completed the task in less than 8 minutes, Fig. 21 shows the decrease in the time taken to complete the task with task order, a clear indicator of the learning effect.
Fig. 16: Task performance for each transparency mode.
Fig. 17: Learning effect in the transparency user study based on points scored.
Fig. 18: Learning effect in the transparency user study based on time taken to complete the task.
5 Information Loss

The study presented so far was based on the assumption that the information flow was fast and continuous for every operator. This was possible because all the users involved in our experimental evaluation had fast, stable Internet connections that showed no issues. However, in remote operations, fast and stable connectivity cannot be taken for granted.
For this reason, we investigate the role that intermittent information flow plays in the efficiency of remote multi-human multi-robot interaction. In this paper, we measure information loss as the time elapsed between two updates of the graphical user interface; in other words, we define information loss as the inverse of the frame rate. With operators and robots in separate environments, it is likely for the operators to experience different levels of information loss. When this happens, we speak of heterogeneous information loss.
For the purposes of our study, we categorize information loss in two ranges of usability. The high usability range (U H) corresponds to levels of information loss that cause negligible discomfort in the operators who experience them. Conversely, we are in the low usability range (U L) when the level of information loss is such that an operator cannot ignore its presence, experiencing some form of discomfort.
In general, the exact extent of these ranges changes with the operators. We thus split our study in two parts. In the pilot study (Sec. 5.1), we investigate the extent of the usability ranges in experiments that involve a single operator. Next, in the main study (Sec. 5.2), we turn to multiple operators and assess the effect of heterogeneous information loss, using the homogeneous case as a baseline reference.
Fig. 19: Task performance for each communication mode.
Fig. 20: Learning effect in the communication user study.
Fig. 21: Learning effect in the communication user study.

5.1 Information Loss Pilot Study

Experimental Setup.
For our pilot study with a single operator we usedthe game scenario presented in Sec. 4 (see Fig. 10). The operator was taskedwith performing half of the game: moving 1 big object and 2 small objects.
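Since the paper defines information loss as the time between consecutive interface updates (the inverse of the frame rate), a given loss level can be simulated by suppressing updates that arrive too soon after the last one shown. A minimal sketch of that idea; the function name and the timestamp stream are hypothetical, not from the paper's implementation:

```python
def visible_updates(update_times, loss_interval):
    """Filter a stream of interface-update timestamps (seconds) so that
    consecutive visible updates are at least `loss_interval` seconds
    apart, simulating a given level of information loss."""
    shown = []
    for t in update_times:
        if not shown or t - shown[-1] >= loss_interval:
            shown.append(t)
    return shown

# With an update every 0.1 s, a 0.5 s loss level shows one update in five:
stream = [round(0.1 * i, 1) for i in range(11)]   # 0.0, 0.1, ..., 1.0
print(visible_updates(stream, 0.5))               # → [0.0, 0.5, 1.0]
```

A loss level of 0 s leaves the stream untouched, matching the baseline condition of the pilot study.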
In contrast to the previous game, we set no time limit to complete the task, instead declaring completion when the required objects reached the goal region. Every participant had to perform the task 6 times, with a different level of information loss each time. The levels spanned from 0 s to 2.5 s in increments of 0.5 s. To compensate for possible learning effects or other confounding factors, we determined different level orderings:
– Increasing order: the information loss increases with every task.
– Decreasing order: the information loss decreases with every task.
– Random 1: the information loss is in the order { , . , . , , , . } s.
– Random 2: the reverse order with respect to Random 1.
Participant Sample.
We recruited 20 university students (7 females, 13 males) with ages ranging from 18 to 31 years old (22. ± . ).
Pilot Study Procedure.
Each session of the study took approximately 90 minutes. After signing the consent form, we explained the task setup and gave the participant 12 minutes to familiarize themselves with the system. After each task, the participant had to answer a subjective questionnaire.
Metrics.
We recorded the subjective and objective measures for each participant and for each task. The performance of the operator was measured as the time taken to complete a task. We used the NASA TLX [28] scale on a 10-point Likert scale to compare the perceived workload in each task. In addition to the workload questionnaire, the participants were asked to report the experienced discomfort on a 10-point Likert scale, followed by a comment box for a free-form description of the type of discomfort experienced.
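Ratings collected on Likert scales like these are ordinal, repeated-measures data, which is why the analyses in this section rely on the Friedman test [25]. A minimal pure-Python sketch of the Friedman chi-square statistic (average ranks for ties, no further tie correction); the function name and the example ratings are ours, not from the paper:

```python
def friedman_chi2(rows):
    """Friedman chi-square for repeated measures.
    `rows` holds one list per participant, with one rating per condition;
    ratings are ranked within each participant (ties get average ranks)."""
    n, k = len(rows), len(rows[0])
    rank_sums = [0.0] * k
    for row in rows:
        # Average rank of x = (#values below x) + (#values tied with x + 1) / 2
        for j, x in enumerate(row):
            below = sum(1 for y in row if y < x)
            tied = sum(1 for y in row if y == x)
            rank_sums[j] += below + (tied + 1) / 2
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Six participants who all rate condition A lowest and condition C highest:
ratings = [[1, 2, 3]] * 6
print(friedman_chi2(ratings))   # → 12.0 (df = k - 1 = 2, p ≈ 0.0025)
```

In practice, a library routine such as `scipy.stats.friedmanchisquare` computes the same statistic together with its p-value.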
Results.
For each item in the NASA TLX scale, we report a significance matrix based on the Friedman test to identify the two ranges of usability. The results are shown in Tables 5-12. The green cells in these tables indicate the high usability range and the red cells indicate the low usability range. We also superimposed the usability ranges in Table 13. From our data, we estimate the high usability range between 0 s and 0.5 s, and the low usability range between 2 s and 2.5 s. For the upcoming main study on information loss (Sec. 5.2), we took the midpoints of these ranges (0.25 s and 2.25 s). Figures 22-29 report the box plots of the recorded readings for the respective metrics.

5.2 Information Loss Main Study
Experimental Setup.
The final study we performed concerns the role of information loss in remote interaction between multiple humans and multiple robots. A particular aspect we intend to explore is the role of heterogeneous information loss across operators. To this aim, we also consider the homogeneous case as a baseline. From the results of the pilot study in Sec. 5.1, we identified two levels of information loss: a low level, corresponding to high usability (0.25 s), and a high level, corresponding to low usability (2.25 s). We again used our collective transport game scenario and asked every participant to perform four experiments, one for each combination of levels of information loss for the operators. Once more, we randomized the order of the tasks to mitigate learning effects and other artifacts. In the following figures and tables, we use the following symbols to denote the four cases:
– Ho LL: low homogeneous information loss;
– Ho HH: high homogeneous information loss;
– He LH: heterogeneous information loss, in which operator 1 has low loss and operator 2 has high loss;
– He HL: heterogeneous information loss, in which the operators are reversed with respect to He LH.
Table 5: Significance matrix for differences in performance between levels of information loss. The shaded regions indicate the two ranges of usability. The cell entries are the p-values based on the Friedman test. The empty cells represent a comparison with no significant difference.
Table 6: Significance matrix for differences in mental load between levels of information loss.
Table 7: Significance matrix for differences in physical load between levels of information loss.
Table 8: Significance matrix for differences in temporal load between levels of information loss.
Table 9: Significance matrix for differences in perceived performance between levels of information loss.
Table 10: Significance matrix for differences in effort between levels of information loss.
Table 11: Significance matrix for differences in frustration between levels of information loss.
Table 12: Significance matrix for differences in visual discomfort between levels of information loss.
Table 13: Overlaid significance matrices for determining the range of operability.
Hypotheses. We seek to validate the following working hypotheses:
– H IL: The case of low homogeneous information loss is the best overall with respect to the other cases in terms of measured metrics.
– H IL: The operators prefer low homogeneous information loss to the other cases.
– H IL: In the heterogeneous information loss case, operators prefer to be the ones with low information loss.
– H IL: Operators prefer experiencing high information loss in the heterogeneous case to being in the high homogeneous loss case.
Participant Sample.
We randomly paired the participants of the pilotstudy, forming 10 teams. Each team went through the four aforementionedcases.
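The four cases are simply the Cartesian product of the two loss levels over the two operators. A minimal sketch of how the conditions can be enumerated; the labels and dictionary layout are ours, chosen to mirror the Ho/He notation:

```python
from itertools import product

# Loss levels (seconds) identified by the pilot study:
LEVELS = {"L": 0.25, "H": 2.25}

# One condition per assignment of a loss level to each of the two operators:
conditions = {
    f"{op1}{op2}": (LEVELS[op1], LEVELS[op2])
    for op1, op2 in product("LH", repeat=2)
}
print(conditions)
# → {'LL': (0.25, 0.25), 'LH': (0.25, 2.25), 'HL': (2.25, 0.25), 'HH': (2.25, 2.25)}
```

'LL' and 'HH' are the homogeneous cases (Ho LL, Ho HH); 'LH' and 'HL' are the heterogeneous cases (He LH, He HL).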
Procedures.
Each session took approximately 105 minutes. Each session began with a training period, followed by 12 minutes of independent exploration of the system by the participants. After each session, each participant had to answer a subjective questionnaire.
Fig. 22: Box plot for performance, in time taken to complete the task. Lower is better.
Metrics.
We recorded subjective and objective metrics for each participant and for each case. We used the same metrics presented in Sec. 4. In addition, we recorded the number of interactions the participants made with the interface, as well as the time interval between those interactions. This allowed us to analyze the difference in workload between operators of the same team.
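The interaction-gap metric can be derived directly from logged interaction timestamps. A minimal sketch, with hypothetical names and data:

```python
from statistics import median

def interaction_gaps(timestamps):
    """Time intervals (seconds) between consecutive interface interactions."""
    ts = sorted(timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]

# One operator's hypothetical interaction log (seconds from session start):
log = [0.0, 1.5, 2.0, 6.0, 7.0]
gaps = interaction_gaps(log)       # [1.5, 0.5, 4.0, 1.0]
print(len(log), median(gaps))      # → 5 1.25
```

The number of interactions is the length of the log; the median gap per operator is the quantity summarized later in the box plots.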
Results.
Tables 14 and 15 show the summarized results for the subjective scales and the objective metrics. We used the Friedman test to establish significance between the different cases. We formed rankings based on the mean ranks for all the attributes that showed statistical significance (p < .05) or marginal significance.
Pilot Study Data Analysis.
Tables 5-13 and Figures 22-29 indicate that, with the increase in information loss, the workload experienced by the operator
Fig. 23: Box plot for reported mental load. Lower is better.
Fig. 24: Box plot for reported physical load. Lower is better.
Fig. 25: Box plot for reported temporal load. Lower is better.
Fig. 26: Box plot for reported perceived performance. Higher is better.
Fig. 27: Box plot for reported effort. Lower is better.
Fig. 28: Box plot for reported frustration. Lower is better.
Fig. 29: Box plot for reported visual discomfort. Lower is better.
increases while performance degrades. We compared the number of interactions made at each level of information loss, and found no significant difference. We also recorded the time interval between interactions. The box plot of the median values (shown in Fig. 31) indicates a significant increase (χ²(1) = 30. , p < . ), in line with the waiting strategy observed in user studies with traditional tele-operation and remote interaction systems [20].
Pilot Study Behavioral Analysis.
We observed the behaviour of the operators during and after each session. Two operators (out of 20) chose to stop their session at 2 s and 2.5 s of information loss. They reported that they had reached the limit of their ability to handle the high information loss. Eleven operators reported that they had reached their limit of frustration at 2.5 s, but nevertheless chose to continue because of their never-give-up attitude and their willingness to help our research. Seven operators reported that they could have handled more than 2.5 s of loss because of their past experience with laggy systems and Internet connections. As for the discomfort experienced by the operators, three operators started experiencing discomfort at 1 s of information loss; four operators at values over 1.5 s; three operators at values over 2 s; and six operators at values over 2.5 s. The reported discomfort included a slight headache and eye fatigue. As part of the exit interview for the pilot study, we asked the participants whether the task order assigned to them impacted their performance in the study. The participants
Table 14: Results of subjective scales with relationships between levels of information loss. The relationships are based on mean ranks obtained through Friedman tests. The symbol ∗ denotes a significant difference (p < .05) and the symbol ∗∗ denotes a marginally significant difference. − denotes negative scales where a lower ranking is better. The table compares Ho LL, Ho HH, He LH, and He HL across the SART, NASA TLX, trust, and interaction subjective scales, reporting χ²(3) and p-values per attribute.
Table 15: Results of objective metrics with relationships between levels of information loss. The relationships are based on mean ranks obtained through Friedman tests. The table reports the time taken for the task (Ho HH > He HL = He LH > Ho LL) and related performance metrics, with χ²(3) and p-values.
Table 16: Ranking scores based on the Borda count. The gray cells indicate the best case for each type of ranking.
Borda Count                                        Ho LL  Ho HH  He LH  He HL
Based on Collected Data Ranking (Tables 14 & 15)   104    36.5   74.5   45
Based on Preference Data Ranking (Fig. 30)         77     29     52     42
Table 17: Results of subjective scales with attribute comparison between op-erators of the same team. The comparisons are based on mean ranks obtainedthrough the Friedman test. The grey cells represent significant differences be-tween operators in the same team.
Attributes | Ho LL (χ², p) | Ho HH (χ², p) | Heterogeneous IL (χ², p)
SART SUBJECTIVE SCALE
Instability of Situation | 0, 1 | 0, 1 | 0.6, 0.439
Complexity of Situation | 3, 0.083 | 0, 1 | 0.529, 0.467
Variability of Situation | 2.667, 0.102 | 1.286, 0.257 | 1.143, 0.285
Arousal | 0.5, 0.480 | 0.667, 0.414 | 7.143, 0.008
Concentration of Attention | 2.667, 0.102 | 0.667, 0.414 | 2.778, 0.096
Spare Mental Capacity | 0.2, 0.655 | 1.286, 0.257 | 5.444, 0.02
Information Quantity | 0.5, 0.480 | 0.5, 0.480 | 1.667, 0.197
Information Quality | 0.5, 0.480 | 0.143, 0.750 | 5.444, 0.02
Familiarity with Situation | 0.2, 0.655 | 0, 1 | 0.057, 0.796
NASA TLX SUBJECTIVE SCALE
Mental Demand | 0, 1 | 0.5, 0.48 | 3.257, 0.071
Physical Demand | 0.333, 0.564 | 0.111, 0.739 | 1.143, 0.285
Temporal Demand | 1, 0.317 | 0.143, 0.705 | 0.077, 0.782
Performance | 2, 0.157 | 0.111, 0.739 | 7.143, 0.008
Effort | 0, 1 | 0.2, 0.655 | 5.444, 0.02
Frustration | 0.333, 0.564 | 0, 1 | 3.267, 0.071
TRUST SUBJECTIVE SCALE
Competence | 0, 1 | 1.8, 0.180 | 9.308, 0.002
Predictability | 2, 0.157 | 0, 1 | 6.231, 0.013
Reliability | 0.333, 0.564 | 2.667, 0.102 | 6.231, 0.013
Faith | 0.333, 0.564 | 0.667, 0.414 | 3.769, 0.052
Overall Trust | 0, 1 | 0.2, 0.655 | 6.231, 0.013
Accuracy | 0, 1 | 0.2, 0.655 | 5.444, 0.02
INTERACTION SUBJECTIVE SCALE
Teammate's Intent | 0.667, 0.414 | 0, 1 | 0.057, 0.795
Teammate's Action | 1.8, 0.180 | 0.143, 0.705 | 0, 1
Task Progress | 0.333, 0.564 | 2.667, 0.102 | 2.579, 0.108
Robot Status | 0.667, 0.414 | 0.4, 0.527 | 5.333, 0.021
Information Clarity | 0.143, 0.705 | 0.5, 0.480 | 0.286, 0.593
Table 18: Results of quantitative scales with attribute comparison between operators of the same team. The comparisons are based on mean ranks obtained through the Friedman test. The grey cells represent significant differences between operators in the same team.
Attributes | Ho LL (χ², p) | Ho HH (χ², p) | Heterogeneous IL (χ², p)
PERFORMANCE OBJECTIVE SCALE
Number of Interactions | 0.111, 0.739 | 1, 0.317 | 0, 1
Time gap between interactions | 0.4, 0.527 | 1.6, 0.206 | 0, 1
Fig. 30: Operator preferences in information loss.
in the increasing order of information loss reported that the increase in loss made them ready for the next task, and that they expected the loss to increase. They reported that, with each task, their familiarity with experiencing loss increased, making them better trained at handling it. All the participants in this category reported that they would have been more frustrated if the task ordering had been reversed, and most frustrated if they had had to experience the maximum information loss in the first task. However, the participants in the decreasing order of information loss reported that they would have been more frustrated if the information loss had been increasing with each task. All the participants in this cohort reported that, as the loss was decreasing, they knew the worst was over and the tasks would only get easier from there on. We call this the count one's blessing phenomenon: the participants preferred and defended their own task order, assuming that the reverse order would only have harmed their performance and interaction quality.
Main Study Data Analysis.
Table 16 shows that Ho LL is the best information loss case, both in terms of usability preference and according to the data collected during the user study. This supports our hypotheses H IL and H IL that low homogeneous information loss is the best overall case. The He LH case is the next best choice for the participants, indicating a preference for low personal information loss. This supports hypothesis H IL. The He HL case is the third choice, showing that either operator experiencing low loss is still better than both operators experiencing high information loss. This supports hypothesis H IL.
Fig. 31: Box plot of the recorded time gap between each interaction for each operator.
We also observed the behaviour of the operators during and after the sessions. Based on the preferences shown in Fig. 30, we could categorize the participants into four typologies. (a)
The Egocentrics: ten participants gave a higher preference to the tasks with low information loss, and a lower preference to the tasks with high information loss. Moreover, when they had to choose between taking the low or the high information loss for themselves and giving the other to their teammate, these participants opted for low information loss, even though that meant that their teammate might get more frustrated by experiencing the higher loss. (b)
The Altruists: five participants preferred to handle high information loss so that their teammate would face lower levels of frustration while interacting with low information loss. These participants reported that they were confident in their ability to handle high information loss, and that, with their teammate experiencing low information loss, their chances of completing the task might increase. (c)
The Egalitarians: four participants preferred homogeneous loss over heterogeneous loss, even if that meant that both operators would have to experience a high information loss. These participants reported that, with homogeneous information loss, they could actively interact with their teammate and handle an equal workload, which they did not experience in the tasks with heterogeneous information loss. (d)
The Thinker: one participant preferred high information loss over low information loss. This participant reported that high information loss provided more time to think before making the next move, and allowed for more interaction with the teammate while doing so.
On the Out-of-the-Loop Performance Problem.
Tables 17 and 18 show that the participants experienced unbalanced awareness, workload, trust, and interaction quality while engaging in the tasks with heterogeneous information loss. This imbalance indicates that the operator experiencing high information loss tends to go out of the loop [21,27]. However, the interaction quality scales show that the significant difference in information awareness is observed only for robot-level information, and not for operator-level information. We attribute this to the fact that the communication channel between the operators experienced no loss or delay in this user study; future work could investigate the impact of loss in the communication between the operators.
6 Conclusions

In this paper, we studied the effects of transparency, inter-human communication, and information loss on multi-human multi-robot interaction. We first performed a study of the most effective interface elements to support information transparency and inter-operator transparency. We analyzed the usability of our interface through a user study with 28 operators, measuring awareness, workload, trust, and interaction efficiency. The findings of the user study indicated mixed transparency as the best transparency mode and mixed communication as the best communication mode.
We then studied the effects of information loss on the performance of the operators through two user studies. The first, a pilot study, aimed to identify the amount of information loss that can be considered noticeable but bearable for the average operator, and the amount that is unbearable. Using the results of this study, we performed a thorough exploration of the role of information loss in multi-operator scenarios, comparing heterogeneous and homogeneous cases. We derived a set of behavioral typologies of users, revealing that remote interaction must consider personal preferences and individual attitudes when forming groups of operators.
Future work will focus on the role of training in multi-human multi-robot interaction. In this study, we assumed that no participant had prior experience with the interface, and we provided minimal guidance to avoid biasing our studies. However, effective multi-human multi-robot interaction for complex missions cannot ignore the need for training and proper teaming according to individual skills.
References
1. Ayanian, N., Spielberg, A., Arbesfeld, M., Strauss, J., Rus, D.: Controlling a team ofrobots with a single input. In: Robotics and Automation (ICRA), 2014 IEEE Inter-national Conference on, pp. 1755–1762. IEEE (2014). URL http://ieeexplore.ieee.org/abstract/document/6907088/
2. Baker, M., Casey, R., Keyes, B., Yanco, H.A.: Improved interfaces for human-robot interaction in urban search and rescue. In: 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), vol. 3, pp. 2960–2965. IEEE (2004)
3. Bhaskara, A., Skinner, M., Loft, S.: Agent Transparency: A Review of Current Theory and Evidence. IEEE Transactions on Human-Machine Systems pp. 1–10 (2020). DOI 10.1109/THMS.2020.2965529. URL https://ieeexplore.ieee.org/document/8982042/
4. Black, D.: Partial justification of the Borda count. Public Choice pp. 1–15 (1976)
5. Breazeal, C., Kidd, C.D., Thomaz, A.L., Hoffman, G., Berlin, M.: Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 708–713. IEEE (2005)
6. Calhoun, G.L., Draper, M.H.: Multi-sensory interfaces for remotely operated vehicles. In: Human Factors of Remotely Operated Vehicles. Emerald Group Publishing Limited (2006)
7. Chakraborti, T., Kulkarni, A., Sreedharan, S., Smith, D.E., Kambhampati, S.: Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior. Proceedings of the International Conference on Automated Planning and Scheduling, 86–96 (2019)
8. Chen, J.Y., Durlach, P.J., Sloan, J.A., Bowens, L.D.: Human–robot interaction in the context of simulated route reconnaissance missions. Military Psychology (3), 135–149 (2008)
9. Chen, J.Y., Procci, K., Boyce, M., Wright, J., Garcia, A., Barnes, M.: Situation Awareness-Based Agent Transparency. Tech. rep., Defense Technical Information Center, Fort Belvoir, VA (2014). DOI 10.21236/ADA600351
10. Chen, J.Y.C., Haas, E.C., Barnes, M.J.: Human Performance Issues and User InterfaceDesign for Teleoperated Robots. IEEE Transactions on Systems, Man and Cybernetics,Part C (Applications and Reviews) (6), 1231–1245 (2007). DOI 10.1109/TSMCC.2007.905819. URL http://ieeexplore.ieee.org/document/4343985/
11. Chen, J.Y.C., Lakhmani, S.G., Stowers, K., Selkowitz, A.R., Wright, J.L., Barnes, M.: Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science (3), 259–282 (2018). DOI 10.1080/1463922X.2017.1315750
12. Cheung, Y., Chung, J.H.: Semi-Autonomous Control of Single-Master Multi-Slave Teleoperation of Heterogeneous Robots for Multi-Task Multi-Target Pairing. International Journal of Control and Automation (3), 17 (2011)
13. Collett, T., MacDonald, B.: Developer oriented visualization of a robot program: An augmented reality approach. In: Proc. of the 2006 ACM Conference on Human-Robot Interaction, pp. 2–4 (2006)
14. Daily, M., Cho, Y., Martin, K., Payton, D.: World embedded interfaces for human-robot interaction. In: 36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the, pp. 6–pp. IEEE (2003)
15. Dardona, T., Eslamian, S., Reisner, L.A., Pandya, A.: Remote Presence: Development and Usability Evaluation of a Head-Mounted Display for Camera Control on the da Vinci Surgical System. Robotics (2), 31 (2019). DOI 10.3390/robotics8020031
16. Darken, R.P., Peterson, B.: Spatial orientation, wayfinding, and representation. (2014)
17. Dimitoglou, G.: Telepresence: Evaluation of Robot Stand-Ins for Remote Student Learning. Journal of Computing Sciences in Colleges, 15 (2019)
18. Do, N.D., Yamashina, Y., Namerikawa, T.: Multiple Cooperative Bilateral Teleoperation with Time-Varying Delay. SICE Journal of Control, Measurement, and System Integration (2), 89–96 (2011). DOI 10.9746/jcmsi.4.89. URL http://japanlinkcenter.org/JST.JSTAGE/jcmsi/4.89?lang=en&from=CrossRef&type=abstract
19. Dong Gun Lee, Gun Rae Cho, Min Su Lee, Byung-Su Kim, Sehoon Oh, Hyoung IlSon: Human-centered evaluation of multi-user teleoperation for mobile manipulator inunmanned offshore plants. In: 2013 IEEE/RSJ International Conference on IntelligentRobots and Systems, pp. 5431–5438. IEEE, Tokyo (2013). DOI 10.1109/IROS.2013.6697142. URL http://ieeexplore.ieee.org/document/6697142/
20. Ellis, S.R., Mania, K., Adelstein, B.D., Hill, M.I.: Generalizeability of latency detection in a variety of virtual environments. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 48, pp. 2632–2636. SAGE Publications, Los Angeles, CA (2004)
21. Endsley, M.R., Kiris, E.O.: The out-of-the-loop performance problem and level of control in automation. Human Factors (2), 381–394 (1995)
22. Esfahlani, S.S.: Mixed reality and remote sensing application of unmanned aerial vehicle in fire and smoke detection. Journal of Industrial Information Integration (2019). DOI 10.1016/j.jii.2019.04.006. URL https://linkinghub.elsevier.com/retrieve/pii/S2452414X18300773
23. Ferreira, L.R.N., Pereira, L.T.: Immersive Mobile Telepresence Systems: A Systematic Literature Review. Journal of Mobile Multimedia, 16 (2020)
24. Feth, D., Tran, B.A., Groten, R., Peer, A., Buss, M.: Shared-Control Paradigms in Multi-Operator-Single-Robot Teleoperation. In: R. Dillmann, D. Vernon, Y. Nakamura, S. Schaal, H. Ritter, G. Sagerer, R. Dillmann, M. Buss (eds.) Human Centered Robot Systems, vol. 6, pp. 53–62. Springer Berlin Heidelberg, Berlin, Heidelberg (2009). DOI 10.1007/978-3-642-10403-9_6. URL http://link.springer.com/10.1007/978-3-642-10403-9_6. Series Title: Cognitive Systems Monographs
25. Friedman, M.: The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association 32(200), 675–701 (1937)
26. Goertz, R.C., Thompson, W.M.: Electronically controlled manipulator. Nucleonics (US) (1954)
27. Gouraud, J., Delorme, A., Berberian, B.: Autopilot, mind wandering, and the out of the loop performance problem. Frontiers in Neuroscience, 541 (2017)
28. Hart, S.G., Staveland, L.E.: Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In: Advances in Psychology, vol. 52, pp. 139–183. Elsevier (1988)
29. Hokayem, P.F., Spong, M.W.: Bilateral teleoperation: An historical survey. Automatica 42(12), 2035–2057 (2006). DOI 10.1016/j.automatica.2006.06.027. URL https://linkinghub.elsevier.com/retrieve/pii/S0005109806002871
30. Holdcroft, D.: Forms of indirect communication: An outline. Philosophy & Rhetoric, pp. 147–161 (1976)
31. Hong, A., Bülthoff, H.H., Son, H.I.: A visual and force feedback for multi-robot teleoperation in outdoor environments: A preliminary result. In: 2013 IEEE International Conference on Robotics and Automation, pp. 1471–1478. IEEE, Karlsruhe, Germany (2013). DOI 10.1109/ICRA.2013.6630765. URL http://ieeexplore.ieee.org/document/6630765/
32. Jang, I., Hu, J., Arvin, F., Carrasco, J., Lennox, B.: Omnipotent Virtual Giant for Remote Human-Swarm Interaction. arXiv:1903.10064 [cs] (2019). URL http://arxiv.org/abs/1903.10064
33. Liu, J., Sun, L., Chen, T., Huang, X., Zhao, C.: Competitive Multi-robot Teleoperation. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 75–80. IEEE, Barcelona, Spain (2005). DOI 10.1109/ROBOT.2005.1570099. URL http://ieeexplore.ieee.org/document/1570099/
34. Jung, H., Song, Y.E.: Robotic remote control based on human motion via virtual collaboration system: A survey. Journal of Advanced Mechanical Design, Systems, and Manufacturing (7), JAMDSM0126 (2018). DOI 10.1299/jamdsm.2018jamdsm0126
39. Klow, J., Proby, J., Rueben, M., Sowell, R.T., Grimm, C.M., Smart, W.D.: Privacy, Utility, and Cognitive Load in Remote Presence Systems. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pp. 167–168. ACM, Vienna, Austria (2017). DOI 10.1145/3029798.3038341. URL https://dl.acm.org/doi/10.1145/3029798.3038341
40. Klow, J., Proby, J., Rueben, M., Sowell, R.T., Grimm, C.M., Smart, W.D.: Privacy, Utility, and Cognitive Load in Remote Presence Systems. In: M.A. Salichs, S.S. Ge, E.I. Barakova, J.J. Cabibihan, A.R. Wagner, A. Castro-González, H. He (eds.) Social Robotics, vol. 11876, pp. 730–739. Springer International Publishing, Cham (2019). DOI 10.1007/978-3-030-35888-4_68. URL http://link.springer.com/10.1007/978-3-030-35888-4_68. Series Title: Lecture Notes in Computer Science
41. Kolling, A., Sycara, K., Nunnally, S., Lewis, M.: Human Swarm Interaction: An Experimental Study of Two Types of Interaction with Foraging Swarms. Journal of Human-Robot Interaction 2(2) (2013). DOI 10.5898/JHRI.2.2.Kolling. URL http://dl.acm.org/citation.cfm?id=3109714
42. Lager, M., Topp, E.A., Malec, J.: Remote Supervision of an Unmanned Surface Vessel - A Comparison of Interfaces. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 546–547. IEEE, Daegu, Korea (South) (2019). DOI 10.1109/HRI.2019.8673100. URL https://ieeexplore.ieee.org/document/8673100/
43. Lane, J.C., Carignan, C.R., Sullivan, B.R., Akin, D.L., Hunt, T., Cohen, R.: Effects of time delay on telerobotic control of neutral buoyancy vehicles. In: Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), vol. 3, pp. 2874–2879. IEEE (2002)
44. Lee, D., Franchi, A., Son, H.I., Ha, C., Bülthoff, H.H., Giordano, P.R.: Semiautonomous Haptic Teleoperation Control Architecture of Multiple Unmanned Aerial Vehicles. IEEE/ASME Transactions on Mechatronics (4), 1334–1345 (2013). DOI 10.1109/TMECH.2013.2263963. URL http://ieeexplore.ieee.org/document/6522198/
45. Lee, S., Lucas, N.P., Ellis, R.D., Pandya, A.: Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control. In: Unmanned Systems Technology XIV, vol. 8387, p. 83870N. Baltimore, Maryland, USA (2012). DOI 10.1117/12.919751. URL http://proceedings.spiedigitallibrary.org/proceeding.aspx?doi=10.1117/12.919751
46. Lewis, B., Sukthankar, G.: Two hands are better than one: Assisting users with multi-robot manipulation tasks. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2590–2595. IEEE, San Francisco, CA (2011). DOI 10.1109/IROS.2011.6094815. URL http://ieeexplore.ieee.org/document/6094815/
47. Li, H., Zhang, L., Kawashima, K.: Operator dynamics for stability condition in haptic and teleoperation system: A survey. The International Journal of Medical Robotics and Computer Assisted Surgery (2), e1881 (2018). DOI 10.1002/rcs.1881. URL http://doi.wiley.com/10.1002/rcs.1881
48. Lichiardopol, S.: A Survey on Teleoperation. Technische Universität Eindhoven, DCT report, 34 (2007)
49. Likert, R.: A technique for the measurement of attitudes. Archives of Psychology, 55 (1932). URL https://ci.nii.ac.jp/naid/10024177101/en/
50. Lunghi, G., Marin, R., Di Castro, M., Masi, A., Sanz, P.J.: Multimodal Human-Robot Interface for Accessible Remote Robotic Interventions in Hazardous Environments. IEEE Access, 127290–127319 (2019). DOI 10.1109/ACCESS.2019.2939493. URL https://ieeexplore.ieee.org/document/8823931/
51. Lyons, J.B.: Being transparent about transparency: A model for human-robot interaction. In: 2013 AAAI Spring Symposium Series (2013)
52. Ma, L., Yan, J., Zhao, J., Chen, Z., Cai, H.: Teleoperation System of Internet-Based Multi-Operator Multi-Mobile-Manipulator. In: 2010 International Conference on Electrical and Control Engineering, pp. 2236–2240. IEEE, Wuhan, China (2010). DOI 10.1109/iCECE.2010.551. URL http://ieeexplore.ieee.org/document/5631615/
53. MacKenzie, I.S., Ware, C.: Lag as a determinant of human performance in interactive systems. In: Proceedings of the INTERACT'93 and CHI'93 Conference on Human Factors in Computing Systems, pp. 488–493 (1993)
54. Mansour, C., Shammas, E., Elhajj, I.H., Asmar, D.: Dynamic bandwidth management for teleoperation of collaborative robots. In: 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1861–1866. IEEE, Guangzhou, China (2012). DOI 10.1109/ROBIO.2012.6491239. URL http://ieeexplore.ieee.org/document/6491239/
55. Massimino, M.J., Sheridan, T.B.: Teleoperator performance with varying force and visual feedback. Human Factors (1), 145–157 (1994)
56. Miller, G.A.: The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63(2), 81 (1956)
57. Murphy, R.R.: A decade of rescue robots. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5448–5449. IEEE (2012)
58. Musić, S., Salvietti, G., Dohmann, P.B.g., Chinello, F., Prattichizzo, D., Hirche, S.: Human–Robot Team Interaction Through Wearable Haptics for Cooperative Manipulation. IEEE Transactions on Haptics (3), 350–362 (2019). DOI 10.1109/TOH.2019.2921565. URL https://ieeexplore.ieee.org/document/8733002/
59. Chong, N.Y., Kotoku, T., Ohba, K., Komoriya, K., Matsuhira, N., Tanie, K.: Remote coordinated controls in multiple telerobot cooperation. In: Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), vol. 4, pp. 3138–3143. IEEE, San Francisco, CA, USA (2000). DOI 10.1109/ROBOT.2000.845146. URL http://ieeexplore.ieee.org/document/845146/
60. Nielsen, C.W., Goodrich, M.A.: Comparing the usefulness of video and map information in navigation tasks. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pp. 95–101 (2006)
61. Patel, J., Ramaswamy, T., Li, Z., Pinciroli, C.: Direct and indirect communication in multi-human multi-robot interaction. IEEE Transactions on Human-Machine Systems (2021). Submitted
62. Patel, J., Xu, Y., Pinciroli, C.: Mixed-granularity human-swarm interaction. In: Robotics and Automation (ICRA), 2019 IEEE International Conference on. IEEE (2019)
63. Patel, J., Xu, Y., Pinciroli, C.: Mixed-Granularity Human-Swarm Interaction. In: 2019 IEEE International Conference on Robotics and Automation (ICRA). IEEE, Montreal, Canada (2019). URL http://arxiv.org/abs/1901.08522
64. Patel, T.M., Shah, S.C., Pancholy, S.B.: Long Distance Tele-Robotic-Assisted Percutaneous Coronary Intervention: A Report of First-in-Human Experience. EClinicalMedicine, 53–58 (2019). DOI 10.1016/j.eclinm.2019.07.017. URL https://linkinghub.elsevier.com/retrieve/pii/S2589537019301373
65. Pinciroli, C., Trianni, V., O'Grady, R., Pini, G., Brutschy, A., Brambilla, M., Mathews, N., Ferrante, E., Di Caro, G., Ducatelle, F., Birattari, M., Gambardella, L.M., Dorigo, M.: ARGoS: a modular, parallel, multi-engine simulator for multi-robot systems. Swarm Intelligence (4), 271–295 (2012)
66. Rakita, D., Mutlu, B., Gleicher, M.: Remote Telemanipulation with Adapting Viewpoints in Visually Complex Environments. In: Robotics: Science and Systems XV. Robotics: Science and Systems Foundation (2019). DOI 10.15607/RSS.2019.XV.068
71. Roundtree, K.A., Goodrich, M.A., Adams, J.A.: Transparency: Transitioning From Human–Machine Systems to Human-Swarm Systems. Journal of Cognitive Engineering and Decision Making (2019). DOI 10.1177/1555343419842776. URL http://journals.sagepub.com/doi/10.1177/1555343419842776
72. Schauß, T., Groten, R., Peer, A., Buss, M.: Evaluation of a Coordinating Controller for Improved Task Performance in Multi-user Teleoperation. In: D. Hutchison, T. Kanade, J. Kittler, J.M. Kleinberg, F. Mattern, J.C. Mitchell, M. Naor, O. Nierstrasz, C. Pandu Rangan, B. Steffen, M. Sudan, D. Terzopoulos, D. Tygar, M.Y. Vardi, G. Weikum, A.M.L. Kappers, J.B.F. van Erp, W.M. Bergmann Tiest, F.C.T. van der Helm (eds.) Haptics: Generating and Perceiving Tangible Sensations, vol. 6191, pp. 240–247. Springer Berlin Heidelberg, Berlin, Heidelberg (2010). DOI 10.1007/978-3-642-14064-8_35. URL http://link.springer.com/10.1007/978-3-642-14064-8_35. Series Title: Lecture Notes in Computer Science
73. Shahzad, N., Chawla, T., Gala, T.: Telesurgery prospects in delivering healthcare in remote areas. The Journal of the Pakistan Medical Association (01), 4 (2019)
74. Sheridan, T.B.: Humans and automation: System design and research issues. Human Factors and Ergonomics Society (2002)
75. Sheridan, T.B., Ferrell, W.R.: Remote manipulative control with transmission delay. IEEE Transactions on Human Factors in Electronics (1), 25–29 (1963)
76. Taylor, R.M.: Situational Awareness Rating Technique (SART): The development of a tool for aircrew systems design. In: Situational Awareness, pp. 111–128. Routledge (1990)
77. Tomasello, M.: Origins of Human Communication. MIT Press (2010)
78. Tulli, S., Correia, F., Mascarenhas, S., Gomes, S., Melo, F.S., Paiva, A.: Effects of agents' transparency on teamwork, pp. 22–37 (2019)
79. Uggirala, A., Gramopadhye, A.K., Melloy, B.J., Toler, J.E.: Measurement of trust in complex and dynamic systems using a quantitative approach. International Journal of Industrial Ergonomics (3), 175–186 (2004)
80. Várkonyi, T.A., Rudas, I.J., Pausits, P., Haidegger, T.: Survey on the control of time delay teleoperation systems. In: IEEE 18th International Conference on Intelligent Engineering Systems INES 2014, pp. 89–94. IEEE, Tihany, Hungary (2014). DOI 10.1109/INES.2014.6909347. URL http://ieeexplore.ieee.org/document/6909347/
81. Watson, B., Walker, N., Ribarsky, W., Spaulding, V.: Effects of variation in system responsiveness on user performance in virtual environments. Human Factors (3), 403–414 (1998)
82. Welburn, E., Wright, T., Marsh, C., Lim, S., Gupta, A., Crowther, W., Watson, S.: A Mixed Reality Approach to Robotic Inspection of Remote Environments. In: Embedded Intelligence: Enabling & Supporting RAS Technologies, pp. 72–74. UK-RAS Network (2019). DOI 10.31256/UKRAS19.19
83. Wohleber, R.W., Stowers, K., Chen, J.Y., Barnes, M.: Effects of agent transparency and communication framing on human-agent teaming. In: 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 3427–3432. IEEE, Banff, AB (2017). DOI 10.1109/SMC.2017.8123160