Augmentix – An Augmented Reality System for Asymmetric Teleteaching
Nico Feld
[email protected]
University of Trier, Trier, Germany
ABSTRACT
Using augmented reality in education is already a common concept, as it has the potential to turn learning into a motivational experience. However, current research only covers the students' side of learning. Almost no research focuses on the teachers' side and on whether augmented reality could improve their workflow of teaching the students. Many researchers do not differentiate between multiple user roles, such as a student and a teacher. To allow investigation into these gaps in research, the teaching system "Augmentix" is presented, which differentiates between the two user roles "teacher" and "student" and aims to enhance the teacher's workflow by using augmented reality. In this system's setting, the student can explore a virtual city in virtual reality, and the teacher can guide him or her with augmented reality.
KEYWORDS
augmented reality, virtual reality, teleteaching, teleeducation, tangibles, collaborative virtual environment
1 INTRODUCTION

Technologies such as augmented reality (AR) have the potential to enrich the physical world with virtual content by augmenting the physical environment with digital information. This combination (mixing) of a physical with a virtual environment may simplify the interaction with this virtual content, as the user still has a reference to the known physical environment [13]. Since augmented reality started to rise, it has been examined for the benefits it can bring to teaching and education, especially since rapid improvements in mobile technologies (such as smartphones) made it accessible to a wider audience.

One scenario which we think could greatly benefit from using augmented reality is the virtual classroom: the students can freely move around a virtual environment, and the teacher uses AR to guide them through this environment and teach them about a specific topic. This scenario not only allows distributed users to participate in such a learning experience; technologies like AR could further improve their experience [11].

Furthermore, we think that AR could also improve the workflow of the guide. By combining the real physical environment and the virtual environment, AR provides natural and intuitive interaction, especially when real objects are used to interact with the virtual environment, so-called "tangibles" [1]. This could allow the guide to teach the student in a natural and intuitive way, as he/she would do in a real city tour, without having to adapt his/her workflow as much as would be necessary with a non-AR system.

Considering this scenario, we may define this situation as an asymmetric CVE as defined by Feld & Weyers [5]. In an asymmetric CVE, they specify two user roles. A primary is a user who controls the CVE and contributes most of the content. A secondary, on the other hand, is a user who mostly consumes this content and contributes only minor parts. In a typical asymmetric teleteaching CVE, the teacher would be a primary while the students would be multiple secondaries, as the teacher provides the information the students consume. In a symmetric CVE, every user has the same role and therefore every user contributes equally; a typical symmetric CVE would be a virtual room for autonomous group work.

While Feld & Weyers found two examples of symmetric CVE systems for teaching that use AR [7, 10], they found no asymmetric CVE for teaching. The authors of this paper additionally found one system where one user had a different role than the others [4], but this only included the ability to show slides. Therefore, there is a lack of research regarding asymmetric CVEs with AR for the scenario we suggest, in which there is one teacher and multiple students.

In addition, Feld & Weyers also found no example of an asymmetric CVE in which the primary uses AR to control and contribute to the CVE. Every usage of AR in the research they found was limited to the secondaries, to turn learning into a motivational learning experience [11], and never tackled the question of whether AR could potentially improve the primary's workflow in teaching the secondaries.
[Figure 1: The CVE of Augmentix. The primary's physical environment, augmented via AR, and the secondary's virtual environment form one CVE. The primary and the secondary communicate directly via voice chat; the primary manipulates and controls the CVE and perceives the secondary's surroundings.]
Therefore, we identified two major research questions about asymmetric CVEs using AR for a system implementing the given scenario:

(1) How to design an asymmetric CVE in the context of education?
(2) Can a primary benefit from AR?

To answer these questions, we present Augmentix, an asymmetric CVE for teaching that implements the suggested scenario. In this system, the primary uses AR to control the CVE, and the secondaries are guided by the primary through a virtual environment. In the prototype of Augmentix, the secondaries can take a virtual city tour in virtual reality (VR) through an ancient version (400 A.D.) of the German city of Trier. This city tour is guided by a primary, who is in his/her office using AR.

In this paper, CVE describes the whole environment both the primary and the secondaries are in, i.e. the primary's virtual and physical environment and the secondaries' virtual environment. This CVE is visualized in Figure 1. The virtual environment of the secondary describes the environment the secondary is in; in the case of Augmentix, this is a virtual representation of ancient Trier. The primary's environment, on the other hand, is split into his/her physical environment and a virtual environment for augmentation.

To design and implement a system for this scenario, the requirements of this system are first derived (section 2). Based on these requirements, a workflow is designed (section 3), which is then implemented in a prototype (section 4). This prototype was evaluated in a small expert interview (section 5). Based on the findings of this interview, the results are discussed and conclusions are drawn (section 6), and potential future work is defined (section 7).
2 REQUIREMENTS

To implement a prototype for the given scenario, the system must meet certain design requirements (DR). Based on these DRs, we derived certain technical requirements (TR). In the following, the DRs are defined first and then the resulting TRs are listed.
Design requirements
For this scenario, the secondaries need to move freely around the virtual city, and the primary can then focus on one secondary to communicate with him/her and guide him/her through the city. To do this, the primary must be able to display additional information to the secondary and to manipulate the CVE. To allow the primary to display the correct information, he/she must know the virtual environment of the secondary. Furthermore, the primary also needs to be able to interact with his/her real environment, to be able to test possible enhancements of his/her workflow and thus answer the two identified research questions. Therefore, we define the following requirements:

DR 1. Communication
1. The primary and secondaries need to communicate
2. The primary needs to see the secondary and his/her virtual environment
3. The primary needs to steer the attention of the secondary towards certain points of interest

DR 2. Interaction
1. The primary needs to interact with and manipulate the CVE
2. The primary needs to show additional information to the secondary
3. The primary needs to use AR
4. The primary needs to potentially switch between multiple secondaries

DR 3. Navigation
1. The secondary needs to move freely inside the CVE
2. The primary needs to interact with the secondary regardless of their position in the environment
Technical requirements
Based on these DRs, we identified the following technical requirements:

TR 1 Network connection between both parties
TR 2 Communication devices for both parties
TR 3 AR display device for the primary
TR 4 Input device for the primary
TR 5 Tangibles for the primary
TR 6 Display & input device for the secondary

Because the users share the same CVE, some sort of network connection between them is mandatory (TR 1) to synchronize the two local environments. To let the primary and secondaries communicate (DR 1.1), some sort of communication device is needed (TR 2). To display the virtual environment to the primary (DR 1.2, DR 1.3, DR 2.1, DR 2.2 and DR 3.2), a display device is needed; to further meet DR 2.3, this display device must support AR (TR 3). This also leads to TR 4, as the primary must have a way to interact with the CVE. In Augmentix, we decided to use a common smartphone as AR display and input device. This also meets TR 2, as the built-in microphone can be used. In future work, this system can be extended to support an AR HMD (e.g. HoloLens 2), as this allows an immersive experience and supports full hand tracking, which could further improve the primary's workflow.

To let the primary interact with the CVE (DR 2.1) while using AR (DR 2.3) to interact with his/her real environment, the authors used tangibles (TR 5), as they can be used as a physically-based interface metaphor [2].

To meet DR 3.1, the secondary needs a display and input device to move freely inside the CVE. In this system, he/she uses virtual reality to immerse him-/herself into and interact with the CVE (TR 6). As research showed that VR, like AR, can create a motivational learning experience [3], the authors decided to implement the secondary's side with VR.

3 WORKFLOW

Augmentix was designed with support for multiple secondaries in mind. While the technical implementation only supports one secondary in this prototype, the workflow was defined for multiple secondaries (DR 2.4).

When the secondary starts Augmentix, he/she is directly put into the virtual environment and can start exploring it freely via the teleport metaphor. Because the teleport metaphor only allows teleporting over a short distance, the secondary also has a map of the virtual environment attached to his/her left hand. This map can be shown and hidden by pressing a button and shows buttons for every major point of interest (POI) in the virtual environment. By clicking one of these buttons on the map, the secondary is teleported to that POI. With the short teleport and the teleport to POIs, DR 3.1 is fulfilled.

The primary's side of the CVE, on the other hand, was divided into two major parts: the "Map-Hub" and the "Pickup-Shovel". The Map-Hub gives the primary an overview of the positions of the secondaries and allows him/her to pick one secondary to interact with further. The primary can return to the Map-Hub at any time for an overview or to focus on another secondary. To meet DR 2.3, the Map-Hub was realized with AR in mind: a real map of ancient Trier is augmented, and the secondaries are displayed with an arbitrary avatar at their position in the city. The primary can walk up to this map and see all the secondaries and their positions in the city. Figure 2a shows the Map-Hub with the position of a secondary; the corresponding view of the secondary, standing in front of the "Porta Nigra", is shown in Figure 2b.

[Figure 2: (a) The Map-Hub as seen by the primary, the secondary marked with a red arrow; (b) the corresponding view of the secondary.]
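How the secondaries' positions could be mirrored onto the augmented map is illustrated by the minimal sketch below. The component name MapHubAvatar and the assumption that the avatar is parented under a map anchor whose local space corresponds to the city's coordinates are hypothetical illustrations, not the actual implementation.

using UnityEngine;

// Hypothetical sketch: mirror a secondary's position in the virtual city onto the
// augmented Map-Hub. The avatar is assumed to be a child of a map anchor object
// whose local coordinate system corresponds to the city's coordinates.
public class MapHubAvatar : MonoBehaviour
{
    public Transform secondaryInCity;  // the secondary's avatar in the virtual environment

    void LateUpdate()
    {
        // Using the secondary's city position as local position places the avatar
        // at the corresponding spot on the physical map.
        transform.localPosition = secondaryInCity.position;
    }
}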
Any secondary can now be picked up by using a shovel metaphor with the "Pickup-Shovel": the primary holds the Pickup-Shovel next to a secondary on the Map-Hub to pick him/her up. If this action is performed successfully, the secondary is no longer displayed on the Map-Hub but on the Pickup-Shovel. This allows the primary to move the secondary anywhere in his/her real physical environment. For example, by placing the Pickup-Shovel on his/her desk, the primary can view and interact with the secondary while sitting at the desk instead of standing in front of the Map-Hub. Furthermore, the virtual environment of the secondary is now also displayed to the primary (DR 1.2).

A voice chat can now be started by the primary via a toggle button in his/her UI, so that both users can communicate with each other (DR 1.1). To keep an overview of the environment, the primary can adjust its scale via a slider: if the primary wants an overview of the surroundings, he/she can scale the environment down, and if he/she wants to interact with a smaller object, he/she can scale the environment up to allow more precise interactions. By default, the primary's viewpoint remains at the center of the Pickup-Shovel. Therefore, if the secondary moves around in his/her environment, in the primary's view the environment moves rather than the secondary. While this allows the primary to keep focus on the secondary, it makes it difficult to interact with the environment while the secondary is moving. Therefore, the primary can switch the camera mode, via a toggle button in the interface, to remain relative to the environment rather than following the secondary's position.
This allows the primary to interact with the secondary regardless of his/her position in the virtual environment (DR 3.2). Figure 3 shows the primary's interface when interacting with the secondary. The toggle button "Audio Stream" on the top right enables or disables the voice chat, the toggle button "LockCam" on the bottom left changes the camera mode, and the slider on the bottom left allows changing the scale. The red "X" button removes the secondary from the Pickup-Shovel and places him/her back into the Map-Hub, so that the primary can pick up another secondary. The secondary currently does not have any indication of whether he/she is picked up or at the Map-Hub. This could change in future work, as such an indication could possibly change the secondary's behaviour and motivation to explore.

[Figure 3: Interface of the primary while interacting with a secondary.]

To allow interaction with the surroundings, the environment contains "objects of interest" (OOI). By tapping on one of these OOIs on the smartphone, a context menu appears that enables various interactions. These interactions vary depending on the OOI selected, as they can be turned on and off and edited for every OOI beforehand. The currently implemented interactions are:

• Display Text
• Display Video
• Scale
• Highlight
• Change
• Lock

The first two interactions display additional information about the OOI (DR 2.2). When clicking on either the "Text" or the "Video" button, a text box or a video screen appears in front of the secondary; the text and the video must also be set beforehand. With the "Scale" interaction enabled, the primary can adjust the scale of the OOI via a slider. "Highlight" draws an outline around the OOI for both users and displays an arrow to the secondary pointing towards the OOI. This arrow is removed when the secondary is next to the OOI and looking towards it.
This interaction can be used by the primary to steer the focus of the secondary (DR 1.3). The "Change" interaction allows the primary to change the OOI into another OOI via a dropdown menu.

To let the primary create new OOIs, for example context-related OOIs to substantiate facts, Augmentix uses tangibles. As seen in Figure 4a, when a tangible is placed within the viewport of the smartphone's camera, it gets highlighted on the screen, and the primary can tap on it to display a dropdown menu. Figure 4b shows this dropdown menu with the list of all possible OOIs the primary can create. By selecting one of these OOIs, it is created in the CVE and thus becomes visible to both users. If the primary now physically moves the tangible around, the created OOI moves in the secondary's environment simultaneously. Therefore, the primary can place new OOIs next to other OOIs to compare the two. Figure 4 shows this workflow of creating a new OOI via a tangible.

[Figure 4: Workflow of the tangibles. (a) Tangible without an attached OOI; (b) the dropdown menu to select a new OOI; (c) the context menu of the new OOI.]

With the last interaction, "Lock", the primary can lock the position of the OOI so that it no longer follows the movement of the tangible. This is done via a click on the lock symbol; the primary can then use the tangible to add another OOI. A locked OOI can also be removed via a red trashcan button. If an empty tangible is held next to an OOI, the OOI can be attached back onto the tangible by clicking the lock symbol again. Figure 4c shows the OOI "Porta Nigra" currently attached to a tangible, with the interactions "Display Text", "Display Video", "Scale", "Change" and "Lock" enabled. With the tangibles and the implemented interactions, the primary can interact with and manipulate the CVE, and therefore DR 2.1 and DR 2.3 are met. A summary and overview of the solutions for the DRs is given in Table 1.

Table 1: Solutions to the design requirements

Design requirement   Solution
DR 1.1               Voice chat
DR 1.2               Pickup-Shovel
DR 1.3               Highlight interaction
DR 2.1               Interactions and OOIs
DR 2.2               Display Text/Video
DR 2.3               Map-Hub, Pickup-Shovel and tangibles
DR 2.4               Map-Hub
DR 3.1               Teleport and POI teleport
DR 3.2               Multiple camera modes

With this proposed workflow, all DRs are met. The next section covers the technical implementation of this workflow.
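As an illustration of how such a per-OOI interaction configuration could be modelled in Unity, the following minimal sketch uses a hypothetical component; the class and field names are assumptions, not the actual implementation. The primary's context menu would only offer buttons for the interactions enabled here.

using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch: per-OOI configuration of the available interactions.
// Text and video content are set beforehand, as described above.
public class OOIConfiguration : MonoBehaviour
{
    [Header("Enabled interactions")]
    public bool displayText;
    public bool displayVideo;
    public bool scale;
    public bool highlight;
    public bool change;
    public bool lockPosition;

    [Header("Content shown to the secondary")]
    [TextArea] public string infoText;
    public VideoClip infoVideo;
}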
4 IMPLEMENTATION

This system was implemented using the Unity 3D game engine [12]. Unity allows creating 3D applications for various platforms such as mobile devices (Android, iOS) and desktop PCs (Windows, Linux, macOS).
Augmented reality
The augmented reality part of Augmentix is implemented with Vuforia [8], and therefore the following definitions only cover marker-based tracking. An "augmentation entity" describes an entity of the CVE that is used for AR. This entity consists of two major parts: a physical prop and a virtual representation in Unity. The physical prop is tracked in the real physical environment, and its position and rotation are mapped onto the virtual representation, so that the virtual representation moves like the physical prop does.
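Conceptually, an augmentation entity can be pictured as in the minimal sketch below. Note that with Vuforia the Image Target's transform is driven by the tracking itself, so this component and its field names are only an illustration of the prop-to-representation mapping, not the actual implementation.

using UnityEngine;

// Conceptual sketch only: mirror the pose of a tracked physical prop onto a
// virtual representation. With Vuforia this mapping is performed by the Image
// Target itself; the component here merely illustrates the idea.
public class AugmentationEntity : MonoBehaviour
{
    public Transform trackedProp;            // pose delivered by the tracking system
    public Transform virtualRepresentation;  // content rendered on top of the prop

    void LateUpdate()
    {
        virtualRepresentation.SetPositionAndRotation(trackedProp.position, trackedProp.rotation);
    }
}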
In Augmentix, the physical props are simply printed-out images, also called "markers". The virtual representations of these markers are called "Image Targets" in Vuforia. In Augmentix there are five augmentation entities:

• the Map-Hub
• the Pickup-Shovel
• 3× tangibles
Therefore, five unique markers were needed. These were randomly chosen images from the internet, themed around the ancient age. Vuforia allows testing images for their suitability as markers, and only images with the highest rating were used, to minimize tracking issues. These images were then printed and stuck onto a map of Trier (Map-Hub), a small wooden shovel (Pickup-Shovel) and three small wooden chips (tangibles) to enable physical interactions, as seen in Figure 5.

[Figure 5: The physical props with their markers attached.]

To be able to scale all the game objects of one Image Target, a dummy game object named "Scaler" is created as a child of each Image Target. Each game object used for augmentation is then attached as a child of the Scaler. If the Scaler is scaled or moved, all the underlying game objects are scaled or moved as well. This can be used to create an offset for the augmentation or to scale it all at once.

To implement the Map-Hub, one of these images was put on top of a real map. A first attempt to use the map itself as a marker failed, as it has too few feature points, which are needed for a good augmentable image [9]. Because the positions of the secondaries on the Map-Hub should not be based on the position of the image but rather on the position of the map, the offset between the center of the image and the origin of the secondaries' virtual environment must be considered when placing game objects. This offset can be created by moving the Scaler to the position on the map that corresponds to the origin of the secondary's virtual environment, as visualized in Figure 6.
[Figure 6: Visualization of the origin of the Map-Hub.]

In Figure 6, the center of the image is marked red, the origin of the virtual environment of the secondary is marked blue, and the position of the player is marked green. The brown line therefore shows the final offset from the image to the player's position. The origin of the virtual environment of the secondary was chosen arbitrarily when the virtual representation of Trier was created.

Another image is attached to the physical Pickup-Shovel. When this image is tracked while the primary holds the Pickup-Shovel in the camera's view, the distance between the corresponding Image Target and a secondary on the Map-Hub is calculated; if it is below a predefined threshold, the secondary is made a child of the Scaler of the Pickup-Shovel. The secondary then follows the Pickup-Shovel's position, and the primary can place the secondary somewhere else.

The tangibles are similar to the Pickup-Shovel regarding their AR implementation. An image is attached to a wooden chip so that it can be tracked. When an OOI is created as a child of the corresponding Image Target, the OOI follows the tangible's position.
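Both picking up a secondary with the Pickup-Shovel and re-attaching an OOI to an empty tangible boil down to a proximity check followed by re-parenting under the prop's Scaler. A minimal sketch, with a hypothetical component name and an assumed threshold value, could look like this:

using UnityEngine;

// Hypothetical sketch: if the Pickup-Shovel (or an empty tangible) is held close
// enough to an object on the Map-Hub, re-parent that object under the prop's
// Scaler so that it follows the physical prop from then on.
public class ProximityPickup : MonoBehaviour
{
    public Transform scaler;              // "Scaler" child of this Image Target
    public float pickupThreshold = 0.05f; // metres, assumed value

    public bool TryPickUp(Transform candidate)
    {
        if (Vector3.Distance(scaler.position, candidate.position) > pickupThreshold)
            return false;

        // Keep the candidate where it currently is, but let it follow the prop.
        candidate.SetParent(scaler, worldPositionStays: true);
        return true;
    }
}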
Interface
The interface was implemented with the standard UI provided by Unity, which includes common input types such as text fields, buttons, dropdown menus and sliders. The interface is divided into two parts. While the "AR-UI" is used for general interactions with the virtual environment and can easily be implemented with the standard UI, the "OOI-UI" is used to interact with a selected OOI. The OOI-UI is therefore always in the context of a specific OOI. To visualize this context, the OOI-UI's position always follows the OOI's position. Because the OOI's position is in 3D space and the OOI-UI's position is on the 2D screen plane, a projection based on the camera's projection matrix has to be made. Figure 4c shows the positioning of the OOI-UI in the context of the selected OOI. Because the number of buttons representing the different interactions (see section 3) depends on the OOI selected, the positions of these buttons must be adjusted according to this number. Therefore, the buttons are arranged in a circle around the OOI, and the arc distance between the buttons depends on their number. The final equation for the position of a button is:

x = Radius * Screen.width / 100 * Math.Cos(2 * i * Math.PI / Buttons.Count)
y = Radius * Screen.height / 100 * Math.Sin(2 * i * Math.PI / Buttons.Count)
where Radius is a predefined constant that defines the radius of the circle as a percentage of the available screen space, and i is the index of the current button.
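Put together, the OOI-UI positioning could look like the following sketch. It assumes a screen-space overlay canvas, and the field names (arCamera, ooi, buttons, radius) are placeholders rather than the actual implementation.

using System.Collections.Generic;
using UnityEngine;

// Sketch of the OOI-UI layout: project the selected OOI into screen space and
// arrange its interaction buttons on a circle around it.
public class OOIButtonLayout : MonoBehaviour
{
    public Camera arCamera;
    public Transform ooi;                // currently selected OOI in 3D space
    public List<RectTransform> buttons;  // buttons of the enabled interactions
    public float radius = 10f;           // percent of screen width/height

    void LateUpdate()
    {
        // Projection of the OOI onto the 2D screen plane.
        Vector3 center = arCamera.WorldToScreenPoint(ooi.position);
        for (int i = 0; i < buttons.Count; i++)
        {
            float angle = 2f * Mathf.PI * i / buttons.Count;
            float x = radius * Screen.width  / 100f * Mathf.Cos(angle);
            float y = radius * Screen.height / 100f * Mathf.Sin(angle);
            // With a screen-space overlay canvas, UI positions are in screen pixels.
            buttons[i].position = new Vector3(center.x + x, center.y + y, 0f);
        }
    }
}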
Network

For the communication between the users, the framework Photon by Exit Games was used [6]. This framework includes various networking libraries for different aspects of network communication. We used "PUN" for the synchronisation of and interaction with the virtual environment and "Voice" for the implementation of the voice chat. Each game object that should be synchronized with PUN must have a "PhotonView" component. This component handles the global indexing of these objects in the network and defines an owner of the game object. If properties or components of the game object should be synchronized, additional views can be added. These views define which properties of a game object are synchronized and how. One game object can have multiple views handling different properties; for example, a game object that synchronizes its transform and a health component can do this via a transform view and a health view, in which the synchronization is handled. Some of these additional views, such as a transform view, are already provided. Because the provided transform view only handles the global position of objects, and in the primary's virtual environment all synchronized objects are children of Image Targets, this view had to be adjusted to synchronize local positions rather than global positions. A custom transform view was also needed to synchronize the positions of OOIs attached to a tangible: because the OOI is a child of the tangible and therefore lives in another coordinate system, its local position must first be converted into the local coordinate system of the virtual environment of the secondary. Only then can the position of the OOI be set correctly in the virtual environment of the secondary. These projections into different local coordinate systems are provided by Unity.

Because all game objects used must be available to all users, every game object that is synchronized over the network and is not part of the scene from the beginning must be defined as a prefab asset in the "Resources" folder of the project. This is necessary because Unity optimizes builds by removing all unused assets: if an asset is not used in a scene, it is removed by Unity and cannot be used at runtime. An exception to this behaviour is the "Resources" folder, which Unity ignores, so every asset inside it is available at runtime. These assets include:

• a player avatar
• the virtual representation of the city
• every OOI creatable with a tangible

A player avatar is created for each user every time a new secondary starts the application, and its transform is synchronized with every user. Additionally, an instance of the virtual representation of the city is created for the secondary to walk in and another one for the primary. On the primary's side, this representation is hidden at first and later made visible again on the Pickup-Shovel, to show the surroundings of the secondary to the primary.
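A minimal sketch of such a custom view is given below. It assumes PUN 2's IPunObservable interface; the class name LocalTransformView is hypothetical, and the component would still have to be registered in the PhotonView's list of observed components.

using Photon.Pun;
using UnityEngine;

// Hypothetical sketch of a transform view that synchronises local position and
// rotation (PUN 2). The receiving side applies the values in its own local
// coordinate system, i.e. relative to its Image Target / environment.
public class LocalTransformView : MonoBehaviourPun, IPunObservable
{
    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            stream.SendNext(transform.localPosition);
            stream.SendNext(transform.localRotation);
        }
        else
        {
            transform.localPosition = (Vector3)stream.ReceiveNext();
            transform.localRotation = (Quaternion)stream.ReceiveNext();
        }
    }
}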
5 EVALUATION

To validate the system and the workflow, we conducted an expert interview. In this interview, six computer scientists were separately first introduced to the concept and the goal of the system. They could then freely try the secondary's side of the system to get familiar with the secondary's virtual environment and interactions. After that, they were introduced to the workflow and the system of the primary and could freely try the primary's side to get used to the workflow and all interactions provided by Augmentix. Subsequently, they were asked to perform specific tasks to guide a student through a staged virtual city tour. These tasks can be found in the attachments. The secondary was played by one of the authors of this paper, who intentionally forced the expert to use all provided interactions. After the tasks were completed successfully, the expert was asked eight questions about his/her experience. These questions aimed at the functionality of the system and at the advantages and disadvantages of the usage of AR; they can also be found in the attachments. The answers were recorded anonymously, transcribed, and translated from German to English afterwards.

While all experts think that the usage of AR as a primary is very intuitive, immersive and motivational (I.a, I.b), they also think that at the current stage the system is too inconvenient for efficient and long-term use (I.c). One major factor for this inconvenience was the usage of a smartphone (V.b, V.c): to view the virtual content, the smartphone must always be held in view, and therefore all interactions can only be performed with a single hand. As this flaw was suspected, two questions specifically focused on this topic, and the expert interview confirmed that the smartphone had a significant negative impact on the performance and experience of the primary. However, because of the intuitive interactions, three experts stated that they see a big advantage for primaries who are not experts in computer systems, in comparison to a potential system using a common desktop environment (I.b). On the other hand, a system using a desktop environment would probably be more efficient once users become familiar with it. To counter these problems, five experts suggested the use of an AR HMD, as this would allow the primary to use both hands for interactions. Three experts further suggested a stand, such as a tripod, for the smartphone if no HMD is available.

The workflow of the Map-Hub and the Pickup-Shovel was, according to all experts, easy to understand and convenient to use (II.a, II.c). However, all experts think that this workflow does not scale well with multiple secondaries (II.c): while the pickup metaphor was easy to understand, it would be too tricky and imprecise to use, and the overall overview would suffer, when handling multiple secondaries. Two experts also stated that merging the Map-Hub and the Pickup-Shovel into one plane could improve this workflow, as both work spaces would then always be in the field of view of the primary (II.b).

The interface and the interactions were received well by all experts. The interface was clear and easy to understand; only the buttons sometimes overlapped and were then hard to click on (V.c). No expert missed an interaction method while performing the tasks. The usage of tangibles was also described as intuitive and easy; merely the physical props were perceived as too clunky to perform precise interactions with (III.a). Furthermore, two experts were bothered by the repeated context switch between the physical tangibles and the interface of the smartphone (III.b, III.c). Two experts further stated that the secondary's view direction was not clear enough (V.d). As potential new features, three experts proposed a 3D pointing interaction for the primary, to not only highlight objects but to point at specific areas in the environment, like a laser pointer (IV.b). Furthermore, three experts proposed a forced rotation of the secondary to guide his/her focus, in addition to the already implemented highlight interaction (IV.a). Support for multi-touch gestures, such as a pinch gesture, was proposed by five experts, as this would use the gestures already known from common smartphone applications (IV.c). Furthermore, three experts had technical problems, as the system did not always detect the images for the Image Targets, because they unintentionally blocked the view of these images while interacting with the tangibles.

Because the primary and the secondary were in the same room during the interview, the provided voice chat was not needed.
Table 2: Expert feedback

Usage of AR
E2: "... can better assess the behaviour of the user, because I am more immersed" (I.a)
E3: "[The interaction] is very natural ... therefore I think for normal city tour guides [this system] is better than a desktop environment ..." (I.b)
E4: "to place the [objects] in the view of the player was quite fiddly ... I think this will be easier with a mouse" (I.c)

Workflow
E1: "... it was cool ... nothing to complain." (II.a)
E5: "I think it is difficult with two planes (work spaces) ... you could project the Map-Hub onto the desk" (II.b)
E2: "I like the metaphor, but I don't think it does scale well [with multiple students]." (II.c)

Interaction
E1: "... can not put tangibles very close to each other in the real world." (III.a)
E2: "It was confusing ... to change the interaction space." (III.b)
E5: "I always tried to click on the [physical] tangibles." (III.c)
E1: "[The interface] was quite intuitive ..." (III.d)

Features
E3: "I think ... a rotation ... could be very helpful and make [the guidance] easier." (IV.a)
E2: "Pointing would be a nice feature, like 'Hey, look here'" (IV.b)
E5: "What could be more intuitive is if the scaling could be done via a [multitouch] gesture" (IV.c)

Technical Aspects
E2: "I see a big disadvantage in the mobile phone ..." (V.a)
E3: "Depending on the amount of students ... I think two [free] hands are clearly superior." (V.b)
E1: "The buttons are really close to each other" (V.c)
E6: "I couldn't really see where you were facing" (V.d)
Nevertheless, all experts stated that a voice chat would probably be sufficient to communicate with the secondary in a distributed setup, and they do not see any necessity for additional communication channels. Two experts stated, however, that the secondary could possibly benefit from a video chat, because he/she could feel more engaged with the primary when seeing him/her via video.
6 DISCUSSION AND CONCLUSION

While the expert interview showed that the usage of AR as a primary potentially provides an intuitive and immersive experience, at the current stage it does not enhance the workflow of the primary in comparison to a potential system using a common desktop environment. One major drawback was the usage of a common smartphone as input and display device; the technical implementation of AR therefore has a huge impact on the experience of the primary. Furthermore, the proposed workflow was well received in terms of design and usability but, according to the expert interview, will not scale well with multiple students. This leads to the conclusion that the workflow must not only be designed in terms of usability and complexity, but must be efficient and scalable as well. The current workflow therefore does not provide the desired enhanced experience for the primary. The interface was well received and the interaction methods were easy to understand, so only minor tweaks are necessary to optimise these aspects. However, the interface and the interaction methods heavily depend on the implementation of AR: in this first prototype they were designed for a smartphone, but if AR is implemented with an HMD, the interface and the interaction methods must be completely redesigned. In terms of communication channels, the provided voice chat was stated to be sufficient, and therefore no additional channels are needed.

These results lead to the conclusion that this first prototype of Augmentix is not sufficient to tackle the two research questions identified in section 1. While the experts stated that the system is easy, intuitive and immersive, no positive effects on performance were identified. We think, however, that all the negative aspects can be addressed and that positive effects on performance can be achieved by the usage of AR.
7 FUTURE WORK

In potential future work, the negative technical aspects should be addressed first. The authors suggest research that implements AR with an HMD to allow a more immersive and intuitive interaction. Further, the tangibles could be upgraded to allow more physical interactions; a physical button or switch could minimize the need for context switches, which had a negative impact in this first prototype of Augmentix. In addition, a more scalable and efficient workflow must be designed: though the current workflow is easy and intuitive, it was stated by the experts to be neither scalable nor efficient. To achieve significant results, the potential new workflow must be validated in a study with multiple secondaries in one session.

To finally validate that such a system using AR enhances the primary's workflow, a desktop application for guiding a secondary through a virtual city must be designed as well, as such a system is needed to verify the results claimed about a system using AR.
REFERENCES

[1] Mark Billinghurst, Adrian Clark, and Gun Lee. 2015. A Survey of Augmented Reality. Foundations and Trends® in Human–Computer Interaction 8, 2-3 (2015), 73–272. https://doi.org/10.1561/1100000049
[2] Mark Billinghurst, Hirokazu Kato, and Ivan Poupyrev. 2008. Tangible augmented reality. ACM SIGGRAPH ASIA 2008 Courses (01 2008). https://doi.org/10.1145/1508044.1508051
[3] Christoph Borst, Nicholas Lipari, and Jason Woodworth. 2018. Teacher-Guided Educational VR: Assessment of Live and Prerecorded Teachers Guiding Virtual Field Trips. 467–474. https://doi.org/10.1109/VR.2018.8448286
[4] Matt Bower, Mark J. W. Lee, and Barney Dalgarno. 2017. Collaborative learning across physical and virtual worlds: Factors supporting and constraining learners in a blended reality environment. British Journal of Educational Technology 48, 2 (3 2017), 407–430. https://doi.org/10.1111/bjet.12435
[5] Nico Feld and Benjamin Weyers. 2019. Overview of Collaborative Virtual Environments using Augmented Reality. In Mensch und Computer 2019 - Workshopband.
Learning Technologies, IEEE Transactions on.
ECIS. 31.
[12] Unity Technologies. last visited 29.10.2019. Unity 2019.3.0b4. http://unity.com
[13] Feng Zhou, Henry Duh, and Mark Billinghurst. 2008. Trends in Augmented Reality Tracking, Interaction and Display: A Review of Ten Years of ISMAR. 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality.