Digital Transformations of Classrooms in Virtual Reality
Hong Gao, Efe Bozkir, Lisa Hasenbein, Jens-Uwe Hahn, Richard Göllner, Enkelejda Kasneci
Hong Gao∗, Efe Bozkir∗: Human-Computer Interaction, University of Tübingen, Tübingen, Germany
Lisa Hasenbein: Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Tübingen, Germany
Jens-Uwe Hahn: Hochschule der Medien Stuttgart, Stuttgart, Germany
Richard Göllner: Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Tübingen, Germany
Enkelejda Kasneci: Human-Computer Interaction, University of Tübingen, Tübingen, Germany
Figure 1: Immersive virtual reality classroom.
ABSTRACT
With rapid developments in consumer-level head-mounted displays and computer graphics, immersive VR has the potential to bring online and remote learning closer to real-world settings. However, the effects of such digital transformations on learners, particularly for VR, have not been evaluated in depth. This work investigates the interaction-related effects of sitting positions of learners, visualization styles of peer-learners and teachers, and hand-raising behaviors of virtual peer-learners on learners in an immersive VR classroom, using eye tracking data. Our results indicate that learners sitting in the back of the virtual classroom may have difficulties extracting information. Additionally, we find indications that learners engage with lectures more efficiently if virtual avatars are visualized with realistic styles. Lastly, we find different eye movement behaviors towards different performance levels of virtual peer-learners, which should be investigated further. Our findings present an important baseline for design decisions for VR classrooms.

∗ Both authors contributed equally to this research.

CHI ’21, May 8–13, 2021, Yokohama, Japan. © 2021 Association for Computing Machinery. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan, https://doi.org/10.1145/3411764.3445596.
CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI; Virtual reality; • Computing methodologies → Perception; Simulation environments.

KEYWORDS
immersive virtual reality, eye tracking, education, perception, avatars
ACM Reference Format:
Hong Gao, Efe Bozkir, Lisa Hasenbein, Jens-Uwe Hahn, Richard Göllner, and Enkelejda Kasneci. 2021. Digital Transformations of Classrooms in Virtual Reality. In CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3411764.3445596
INTRODUCTION

Recently, many universities and schools have switched to online teaching due to the COVID-19 pandemic. Online and remote learning may become more prevalent in the near future. However, one of the disadvantages of teaching and learning in such ways compared to conventional classroom-based settings is the limited social interaction with teachers and peer-learners. As this may demotivate learners in the long term, solutions that provide better social engagement, such as immersive virtual reality (IVR), can be used for teaching and learning. Next-generation VR platforms such as Engage (https://engagevr.io/) or Mozilla Hubs (https://hubs.mozilla.com/) may offer better social engagement for learners in virtual environments; however, the effects of such environments on learners have to be investigated more thoroughly. In addition to the opportunity to provide more efficient social engagement configurations, VR also enables building and evaluating situations that are difficult to set up in real life (e.g., due to privacy-related concerns or current availability).

While VR technology has a long history in the education domain [22, 48], the current availability of consumer-grade head-mounted displays (HMDs) allows for the creation of immersive experiences at a reasonable cost, making it possible to employ immersive personalized VR experiences in classrooms in the near future [13]. However, the digital transformations of classrooms reflect an important and critical step when developing VR environments for learning purposes and require further research. A unique opportunity to understand the gaze-based behavior, and consequently the attention distribution, of learners in such VR settings is provided through the analysis of the eye movements of learners [49]. Since some high-end HMDs already include integrated eye trackers, it does not require extensive effort to extract eye movement patterns during simulations in VR.
A thorough analysis of eye movements allows inferring information about users that goes beyond the gaze position, for example stress [24], cognitive load [12], visual attention [11], evaluation and diagnosis of diseases [46], future gaze locations [27], or training evaluation [35]. In the virtual classroom, this rich source of information could even be combined with the virtual teachers’ attention, similar to real-world classrooms [21, 59], to design more responsive and engaging learning environments.

In this study, we design an immersive VR classroom that is similar to a real classroom, enabling students to perceive an immersive virtual classroom experience. We focus on exploring the impact of the digital transformation from the classroom to immersive VR on learners by analyzing their eye movements. For this purpose, three design factors are studied: sitting positions of the participating students, different visualization styles of the virtual peer-learners and teachers, and different performance levels of virtual peer-learners expressed through hand-raising behaviors. Figure 1 shows the overall design of the virtual classroom. Consequently, our main contributions are as follows.

• We design an immersive VR classroom and conduct a user study to enable students to virtually perceive “interactive” learning.
• We analyze the effect of different sitting positions on learners, including sitting in the front and back. We find significantly different effects in fixation and saccade durations, and saccade amplitudes, in relation to the sitting position.
• We evaluate the effect of different visualization styles of virtual avatars on learners, including cartoon and realistic styles, and find significantly different effects in fixation and saccade durations, and pupil diameters.
• We assess the effect of different performance levels of virtual peer-learners on learners by evaluating various hand-raising percentages, and find significant effects, particularly in pupil diameters and the number of eye fixations.
RELATED WORK

As head-mounted displays (HMDs) and related hardware become more accessible and affordable, VR technology may become an important factor in the educational domain, particularly given the immersion it provides and its potential for teaching [18, 62]. Various recent works on VR and education indicate that VR may offer significant advantages for learning and teaching. For instance, based on post-session knowledge tests, both augmented and virtual reality (AR/VR) were found to promote intrinsic benefits such as increasing learners’ immersion and engagement when used for learning structural anatomy [42]. In [2], the impact of VR systems on student achievements in engineering colleges was investigated by evaluating the results of post-quizzes; the results show that VR conditions present significant advantages compared to no-VR conditions, since students improve their performance, indicating that VR can successfully support the teaching of engineering classes. Additionally, VR was also evaluated as a means of helping teachers develop specific skills that can be useful in their teaching processes [33].

In addition to teaching and learning processes, another aspect under evaluation concerns the types of virtual environment configurations that are used not only for learning, but also for exploring immersion, motivation, and interaction. To this end, different types of VR setups have been studied. [13] introduced an immersive VR tool to support teaching and studying art history, which, when used with high-school students, led to an increased motivation towards art history. [50] explored the possibility of using low-cost VR setups to improve daily classroom teaching by using a smartphone-based VR system. According to evaluations using pre- and post-tests, the proposed VR setup helps students perform better compared to traditional teaching using a whiteboard and slides.
Furthermore, an HMD-based VR environment was studied in an elementary classroom, in which teachers guided their students in exploring learning elements in immersive virtual field trips [14]. It was concluded that students’ motivation was enhanced after the virtual field trips. Overall, such works imply that, while increasing motivation and engagement, different types of VR environments provide plenty of benefits and can be used to assist learning and teaching processes by providing users with immersive experiences.

One disadvantage of such VR and online learning tools is that learners’ motivation and performance may be affected by a lack of social interaction [26], peer accompaniment [39], or immersion [45]. Furthermore, realism in immersive environments can have various implications [23] related to both learning and interaction. To address these issues, several works have focused on how to provide more realistic and immersive environments. For example, [56] discusses the design of VR environments for classrooms by replicating real learning conditions and enhancing learning through real-time interaction between learners and instructors. Furthermore, [36] constructed virtual classmates by synthesizing previous learners’ time-anchored comments and indicates that when students are accompanied by a small number of virtual peer-learners built from prior learners’ comments, their learning outcomes are improved. In addition to virtual peer-learners, the presence of virtual instructors may also have an impact on learning in VR. [58] investigated this and reports that learners engaged more with the environment and progressed further with the interaction prompts when a virtual instructor was provided.
These works and findings indicate that the styles and types of virtual agents in virtual environments may have several effects on students’ attention and perception during immersion and should be taken into account. The evaluation of real-time visual attention towards similar configurations, which could be carried out using sensors such as eye trackers, may not only help to understand learning processes but also provide empirical insights about interactions during virtual classes for digital transformations of classrooms in VR.

From an immersion and interaction point of view, video teleconferencing systems share similar goals with VR classrooms, as such systems enable people to experience highly immersive and interactive environments [31], and they have been studied in the VR context as well. For example, [32] proposed a video teleconference experience using a VR headset and found that the sense of immersion and the feeling of presence of a remote person increase with VR. Furthermore, different mixed reality (MR)-based 3D collaborative mediums were studied in terms of teleconference backgrounds and user visualization styles [29]; a real background scene and realistically constructed avatars promote a higher sense of co-presence. Low-cost setups were also investigated for real-time VR teleconferencing [34], as was done for VR learning environments, and it was found that it is possible to improve image quality using headsets in these setups. The possibility of having low-cost setups may become an important factor in the future when accessibility and extensive everyday usage of VR environments for learning [2] and interaction [56] are considered.

In general, while visualization styles and rendering are considered to affect learners’ perception and attention in virtual learning environments, particularly in IVR classrooms, other design factors are also important for attention-related tasks.
For instance, [7] studied the effect of being closer to the teacher, being in the teacher’s field of view (FOV), and the availability of virtual co-learners in virtual classrooms. In particular, the authors found that students learn more if they are closer to the teacher and in the center of the teacher’s FOV. In addition, when no co-learners are present, or when the co-learners have positive attitudes towards the lecture (e.g., looking at the teacher or taking notes), students learn more information about the lecture instead of the virtual room. Gazing time was approximated by the time students kept the virtual teacher in their FOVs; however, real-time gaze information was missing during the experiments. Exact gaze patterns and different eye movement events during learning are particularly needed for understanding moment-to-moment visual behaviors of students. In another work, [9] studied the effect of the sitting position on students experiencing attention-deficit/hyperactivity disorder (ADHD) in such classrooms and found indications that front-seated students are affected positively by this configuration in terms of learning. However, similar to [7], the authors did not have gaze information available, but they identified that the evaluation of eye movements may provide additional insights during learning, particularly in terms of real-time visual interaction, when learning and cognitive processes are taken into consideration. In addition, eye movements are also considered a measurement of choice for studying visual perception during learning [25, 28]. [17] and [55] studied attention measures and social interaction in similar setups using continuous performance tests and head movements, respectively. The latter work used head movements as a proxy for visual attention and found that head movements shift between the target and the interaction partner.
This finding partly supports the finding of [58] that learners’ engagement increases when a virtual instructor is presented. However, both works lack eye movement measurements. As also reported by [55], eye movements should be examined along with head movements to understand attention and interaction more in-depth, since the eyes can move differently. In addition, [44] studied the relationship between performance, sense of presence, and cybersickness, whereas [38] examined attention, more particularly ADHD, with a continuous performance task in a virtual classroom. However, both works are in the clinical domain, which is relatively different from an everyday classroom setup. [51] provides a general overview, more from a clinical perspective. Lastly, although it has not been studied extensively in VR yet, peer-learners’ engagement expressed by hand-raising behavior [10] may also affect the attention and visual behaviors of learners in VR classrooms, which could be studied further.

In summary, while showing that VR could be a useful technology to support education, the aforementioned works primarily focused on the importance of the used mediums and configurations, visualization styles, and participant locations for visual attention, engagement, motivation, and learning of participants in VR classrooms. Yet, real-time and moment-to-moment interactions with the environment and visual behaviors of students in an everyday VR classroom setup were not studied in depth. Although obtaining such information in real time is challenging, analyzing eye-gaze and eye movement features can provide valuable insight into visual attention and interaction in a non-intrusive way, especially for designing such classroom configurations. For instance, long fixations can be related to an increased amount of cognitive processing [30], whereas long saccadic behaviors are related to inefficient search behavior [19]. Furthermore, pupillometry is highly related to cognitive workload [3, 4].
The consideration of such information in IVR environments has also been argued for [5, 6]. In fact, when designing immersive VR environments for digital transformations of classrooms in virtual worlds, such features can be key to understanding visual attention, cognitive processes, and visual interactions towards different classroom manipulations, which may also affect learning and teaching processes. To address this research gap, we study three configurations in an everyday VR classroom setup, namely different visualization styles of virtual avatars, sitting positions of participants, and hand-raising-based performance levels of peer-learners, by using eye movement features.
(a) Back sitting participant experiencing the VR classroom. (b) Front sitting participant experiencing the VR classroom. (c) Cartoon-styled avatars. (d) Realistic-styled avatars.
Figure 2: Views from the immersive virtual reality classroom.
METHODOLOGY

The main purpose of our study is to investigate the effects of digital transformations of classrooms to VR settings on learners. Therefore, we designed a user study to examine these effects. In this section, we discuss the participant information, apparatus, experimental design, experiment procedure, measurements, data pre-processing steps, and our hypotheses. Our study and data collection were approved by the institutional ethics committee at the University of Tübingen (date of approval: 25/11/2019, file number: A2.5.4-106_aa) as well as the regional council responsible for educational affairs in the district of Tübingen.
Participants were recruited from local academic-track schools via e-mails and invitation letters. After obtaining written informed consent from both students and their parents or legal guardians, all students who indicated interest were admitted to the study. In total, 381 volunteer sixth-grade students (179 female, 202 male), whose ages range from 10 to 13, took part in the study.
The analyses presented in this work are based on 55 of these sixth-grade students (20 female, 35 male).
In our experiments, we employed HTC Vive Pro Eye devices with a refresh rate of 90 Hz and a field of view of 110°. The VR environment was designed and rendered using the Unreal Game Engine v4.23.1. The screen resolution for each eye was set to 1440 × 1600, and the integrated eye tracker operates with a stated accuracy of 0.5°–1.1°.

The virtual classroom designed in our study has 4 rows and 2 columns of desks along with chairs, as well as other objects that typically exist in conventional classrooms, such as a board and a display. In total, there are 24 virtual peer-learners sitting on the chairs. A virtual teacher standing in front of the classroom teaches a ≈15-minute virtual lecture consisting of four phases: (a) topic introduction, (b) knowledge input, (c) exercises, and (d) summary.

Table 1: Head and eye movement event identification thresholds.
Event            Conditions for velocity (v)                        Conditions for duration (Δ)
Stationary HMD   v_head below a fixed °/s threshold                 -
Fixation         v_head and v_gaze below fixed °/s thresholds       Δ_fixation within fixed ms bounds
Saccade          v_gaze above a fixed °/s threshold                 Δ_saccade within fixed ms bounds

In the beginning of the first phase, the teacher enters the classroom, stays in the classroom for a while, and then introduces the topic of the lecture, “Understanding how computers think”. During the first phase, the teacher asks five simple questions to interact with the students. Some of the peer-learners raise their hands and answer the questions. In the second phase, the teacher explains two terms to the students, namely “loop” and “sequence”. These terms are also shown on the display. Then, the teacher asks four questions about each term, and the peer-learners raise their hands to answer the questions. In the third phase, the teacher gives the students two exercises to evaluate whether or not they have understood the terms correctly. For each exercise, the students have some time to think. Then, the teacher provides the answers for each exercise, and the peer-learners vote for the correct answer by raising their hands. In the last phase, the teacher stands in the middle of the classroom to summarize the lecture. No questions are asked in this phase; therefore, none of the peer-learners raise their hands.

Our study uses a between-subjects design. The participants are located either in the front or the back region of the virtual classroom. Participants sitting in the front of the virtual classroom have one row in front of them, whereas participants sitting in the back have three rows in front of them. The visualization styles of the avatars have two levels as well, namely cartoon and realistic. Lastly, the hand-raising percentages, which are intended to show the performance levels of the virtual peer-learners, have four levels: 20%, 35%, 65%, and 80%. Combining all factors, we have a 2 × 2 × 4 between-subjects design with sixteen conditions.

Each experimental session took ≈
45 minutes including preparation time. We conducted the experiments in groups of ten participants by assigning each participant randomly to one of the sixteen conditions. Before the data assessment took place at the participating schools, students were informed that they could drop out of the study at any time without consequences. After a brief introduction to the experiment and the data collection process, participants had the opportunity to acclimate to the hardware and the VR environment.

The experiment started with the eye tracker calibration. After successful calibration, the experimenters pressed the “Enter” button to start the actual experiment and data collection process, wherein participants experienced the immersive virtual environment and the lecture. The experiments were carried out in one session without breaks, thus mimicking a real classroom teaching session, and lasted about 15 minutes. At the end of the experiment, the VR application displayed a message telling the participants to take off their HMDs. Lastly, participants filled out questionnaires about their experienced presence and perceived realism.
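As a concrete illustration of the experimental design described above, the sixteen conditions (2 sitting positions × 2 visualization styles × 4 hand-raising percentages) and a random assignment of participants to them can be sketched as follows. The function and variable names are our own illustrative choices, not part of the study's Unreal Engine implementation.

```python
import itertools
import random

# The three between-subjects factors and their levels, as described in the study design.
POSITIONS = ["front", "back"]
STYLES = ["cartoon", "realistic"]
HAND_RAISING = [0.20, 0.35, 0.65, 0.80]

# Full-factorial combination: 2 x 2 x 4 = 16 conditions.
CONDITIONS = list(itertools.product(POSITIONS, STYLES, HAND_RAISING))

def assign_participants(participant_ids, seed=42):
    """Randomly assign each participant to one of the sixteen conditions."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}
```

A fixed seed is used here only to make the sketch reproducible; any unbiased randomization scheme would serve the same purpose.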
For this work, our main focus was on the eye-gaze, head-pose, and pupil-related activities of the participants, as these are considered rich information sources, especially in VR. Fixations are periods during which the eyes are stationary within the head while fixated on an area of interest. Saccades, on the other hand, are high-speed ballistic eye movements that shift the eye-gaze from one fixation to another.

Using fixations, saccades, and pupil diameters, a variety of eye movement features can be extracted. In this study, we extracted the number of fixations, fixation durations, saccade durations, saccade amplitudes, and normalized pupil diameters to analyze the different conditions of the experiment. In the eye tracking literature, longer fixation durations correspond to engaging more with the object or to an increased cognitive process [30]. Fixation durations are mainly related to cognition and attention; however, it is argued that they are affected by the procedures that lead to learning, and it is reported that fixation durations can be used to understand learning processes as well [43]. For instance, [15] studied fixation patterns during learning in simulation- and microcomputer-based laboratories and found that the simulation group had longer fixation durations, indicating more attention and deeper cognitive processing. In addition, longer saccade durations correspond to less efficient scanning or searching [19], whereas longer saccade amplitudes mean that attention is drawn from a distance [20]. Furthermore, a larger pupil diameter is related to higher cognitive load [8]. While being task dependent, [16] indicated that pupil diameter measurements under high task load correlate with individual performance. However, as pupil diameter values are also affected by illumination, a controlled environment is needed to assess them. In our VR setup, the illumination is controlled across the different conditions.
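To make the feature definitions above concrete, the following sketch computes the number of fixations, mean fixation and saccade durations, and saccade amplitudes from lists of detected events. The tuple-based event representation is a simplifying assumption of ours, not the paper's data format; saccade amplitude is computed as the angle between the gaze direction vectors at saccade onset and offset.

```python
import numpy as np

def saccade_amplitude_deg(gaze_on, gaze_off):
    """Angular amplitude (degrees) between gaze direction vectors at
    saccade onset and offset."""
    g0 = np.asarray(gaze_on, dtype=float)
    g1 = np.asarray(gaze_off, dtype=float)
    cos_theta = np.dot(g0, g1) / (np.linalg.norm(g0) * np.linalg.norm(g1))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def summarize(fixations, saccades):
    """fixations: list of (t_start_ms, t_end_ms);
    saccades: list of (t_start_ms, t_end_ms, gaze_on, gaze_off)."""
    fix_dur = [end - start for start, end, *_ in fixations]
    sac_dur = [end - start for start, end, *_ in saccades]
    amps = [saccade_amplitude_deg(g0, g1) for _, _, g0, g1 in saccades]
    return {
        "n_fixations": len(fixations),
        "mean_fixation_dur_ms": float(np.mean(fix_dur)) if fix_dur else 0.0,
        "mean_saccade_dur_ms": float(np.mean(sac_dur)) if sac_dur else 0.0,
        "mean_saccade_amp_deg": float(np.mean(amps)) if amps else 0.0,
    }
```

In a real pipeline these summaries would be computed per participant and then compared across the experimental conditions.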
A general overview of eye tracking as a tool to enhance learning with graphics is provided in [41].

Additionally, the self-reported presence and realism were assessed by questionnaires. The items in the questionnaires were based on the conceptualizations of [54] and [37], which were developed particularly to assess students’ perception of the VR classroom situation. The experienced presence and perceived realism were assessed using 4-point Likert scales ranging from 1 (“do not agree at all”) to 4 (“completely agree”) with nine items (e.g., “I felt like I was sitting in the virtual classroom.” or “I felt like the teacher in the virtual classroom really addressed me.”) and six items (e.g.,
“What I experienced in the virtual classroom could also happen in a real classroom.” or “The students in the virtual classroom behaved similarly to real classmates.”), respectively.

Figure 3: Results for different sitting positions. (a) Mean fixation durations. (b) Mean saccade durations. (c) Mean saccade amplitudes. Significant differences are highlighted with * and *** for p < .05 and p < .001, respectively.

As the raw eye tracking data collected from the VR device do not include fixations, saccades, or similar eye movement events, we first pre-processed the data to identify these events. Detecting eye movement events in a VR setup is challenging and differs from traditional eye tracking experiments that include equipment such as chin-rests, as participants can move their heads freely in VR. In the eye tracking literature, the Velocity-Threshold Identification (I-VT) method is used to classify fixations based on velocities [52]. In the VR context, [1] applied a similar method to detect eye movement events, and we opted for a similar approach.

Before applying the I-VT, we first applied linear interpolation for the missing gaze vectors. After the interpolation, we identified fixations when the HMD was stationary; the identification of saccades, however, was not restricted by the HMD movement. The velocity and duration thresholds used for the HMD movement states, fixations, and saccades are depicted in Table 1, where velocities and durations are denoted by v and Δ, respectively. Unlike fixations and saccades, the pupil diameter values are reported directly by the eye tracker. As raw pupil diameter values are affected by blinks and noisy sensor readings, we smoothed and normalized the pupil diameter readings using a Savitzky-Golay filter [53] and divisive baseline correction.

We developed three hypotheses, each corresponding to one design factor.

• Hypothesis-1 (H1): We hypothesize that the different sitting positions of the participants yield different effects on the eye movements.
As the participants sitting in the front are closer to the board, the display, and the teacher, we assume that they can attend the virtual lecture more efficiently than participants in the back and have less difficulty extracting information about the lecture. However, as they have a narrower field of view, particularly towards the frontal part of the classroom, they need to shift their attention more than the participants sitting in the back.

• Hypothesis-2 (H2): We hypothesize that different visualization styles of virtual avatars affect students’ visual behaviors differently. More particularly, as students are familiar with realistic styles in conventional classrooms, we claim that, compared to the cartoon-styled visualization condition, they attend the scene for shorter durations during fixations in the realistic-styled visualization setting, as cartoon-styled avatars are more attractive to the students. Therefore, students engage with the environment more in the cartoon-styled visualization condition than in the realistic-styled condition.

• Hypothesis-3 (H3): We hypothesize that different hand-raising percentages of virtual peer-learners distinctively affect the behaviors of participants. Specifically, we anticipate that when relatively higher hand-raising percentages are provided, such as 65% or 80%, the participants’ cognitive load will be higher due to the fact that many of the peer-learners attend the lecture with high focus. Similarly, participants have more fixations in the classroom in the higher hand-raising percentage conditions, as a higher hand-raising percentage creates more potential points of attention and distraction.
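The event identification used in the data pre-processing can be sketched as follows: angular velocities are computed between consecutive gaze samples, samples are labeled with I-VT-style threshold comparisons on head and gaze velocity, and pupil diameters are normalized by divisive baseline correction. The numeric thresholds below are illustrative placeholders only (the study's actual values are those in Table 1), and the baseline window length is likewise an assumption.

```python
import numpy as np

def angular_velocity_deg_s(gaze_vectors, t_ms):
    """Sample-to-sample angular velocity (deg/s) of gaze direction vectors."""
    g = np.asarray(gaze_vectors, dtype=float)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)  # normalize to unit vectors
    cos_theta = np.clip(np.sum(g[:-1] * g[1:], axis=1), -1.0, 1.0)
    dtheta_deg = np.degrees(np.arccos(cos_theta))
    dt_s = np.diff(np.asarray(t_ms, dtype=float)) / 1000.0
    return dtheta_deg / dt_s

def ivt_labels(v_gaze, v_head, head_thresh=7.0, fix_thresh=30.0, sac_thresh=60.0):
    """I-VT-style sample labels: 'fix' when head and gaze are both slow,
    'sac' when gaze velocity exceeds the saccade threshold.
    Threshold values are placeholders, not the paper's Table 1 values."""
    v_gaze = np.asarray(v_gaze, dtype=float)
    v_head = np.asarray(v_head, dtype=float)
    labels = np.full(v_gaze.shape, "other", dtype=object)
    labels[(v_head < head_thresh) & (v_gaze < fix_thresh)] = "fix"
    labels[v_gaze > sac_thresh] = "sac"
    return labels

def divisive_baseline(pupil_mm, n_baseline):
    """Divisive baseline correction: divide by the mean of an initial window."""
    p = np.asarray(pupil_mm, dtype=float)
    return p / p[:n_baseline].mean()
```

In the study's pipeline, a Savitzky-Golay filter (e.g., scipy.signal.savgol_filter) would additionally smooth the pupil signal before the baseline correction, and consecutive same-label samples would be merged into events and filtered by the duration bounds of Table 1.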
As we have three factors that form sixteen different conditions, we applied a 3-way full-factorial analysis of variance (ANOVA), setting the level of significance to α = .05, with Tukey-Kramer post-hoc tests. For the non-parametric factorial analyses, we used the Aligned Rank Transform (ART) [61] before applying the ANOVA procedures.
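The core of the Aligned Rank Transform can be illustrated for a main effect: responses are aligned by stripping out all effects except the one of interest, then ranked, and a standard ANOVA is run on the ranks. The following is a simplified sketch of the alignment-and-rank step for a balanced design, with ties broken arbitrarily; it is not the full ART/ARTool procedure used in the paper.

```python
import numpy as np

def aligned_ranks_main(y, factor, *other_factors):
    """Align responses for the main effect of `factor`, then rank them.

    Alignment: subtract each full-design cell mean (removing all effects),
    then add back the estimated main effect of the factor of interest.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    cells = list(zip(factor, *other_factors))
    cell_mean = {c: y[[i for i in range(n) if cells[i] == c]].mean()
                 for c in set(cells)}
    grand_mean = y.mean()
    level_mean = {l: y[[i for i in range(n) if factor[i] == l]].mean()
                  for l in set(factor)}
    aligned = np.array([y[i] - cell_mean[cells[i]]
                        + (level_mean[factor[i]] - grand_mean)
                        for i in range(n)])
    ranks = np.empty(n)
    ranks[np.argsort(aligned)] = np.arange(1, n + 1)  # rank 1 = smallest
    return ranks
```

A parametric factorial ANOVA would then be applied to these ranks, once per effect of interest; in practice a dedicated implementation (e.g., the ARTool package cited as [61]) handles ties and interaction effects properly.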
Different sitting positions have an impact on the mean fixation andsaccade durations, and mean saccade amplitudes. The mean fixationdurations of the front and back sitting participants are illustrated inFigure 3 (a). The participants that sit in the back have significantlylonger mean fixation durations ( 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 ) than igital Transformations of Classrooms in Virtual Reality CHI ’21, May 8–13, 2021, Yokohama, Japan (a) Mean fixation durations. (b) Mean saccade durations. (c) Pupil diameters. Figure 4: Results for different avatar visualization styles. Significant differences are highlighted with * for 𝑝 < . . the participants that sit in the front ( 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 ),with 𝐹 ( , ) = . 𝑝 = . 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 ) than in the back condition( 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 ), with 𝐹 ( , ) = . 𝑝 < . 𝑀 = . ◦ , 𝑆𝐷 = . ◦ ) than in the back condition( 𝑀 = . ◦ , 𝑆𝐷 = . ◦ ), with 𝐹 ( , ) = . 𝑝 < . Different avatar visualization styles affect the mean fixation andsaccade durations, and pupil diameters. The results are depicted inFigures 4 (a), (b), and (c), respectively. The mean fixation durationsare significantly longer in the cartoon-styled avatar condition ( 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 ) than in the realistic-styled avatar condi-tion ( 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 ), with 𝐹 ( , ) = . 𝑝 = . 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 )than in the realistic-styled condition ( 𝑀 = . 𝑚𝑠, 𝑆𝐷 = . 𝑚𝑠 ),with 𝐹 ( , ) = . 𝑝 = . 𝑀 = . , 𝑆𝐷 = .
16) than in the cartoon-styled avatar condition( 𝑀 = . , 𝑆𝐷 = . 𝐹 ( , ) = . 𝑝 = . The hand-raising behaviors of virtual peer-learners have significantimpacts on the pupil diameters and number of fixations as depictedin Figures 5 (a) and (b), respectively. We found significant effectson normalized mean pupil diameter values with 𝐹 ( , ) = . 𝑝 = . 𝑀 = . , 𝑆𝐷 = .
16) is significantly larger than in the35% hand-raising condition ( 𝑀 = . , 𝑆𝐷 = . 𝐹 ( , ) = . 𝑝 < . 𝐹 ( , ) = . 𝑝 = .
03. More specifically, thereare notably more fixations in the 65% hand-raising condition ( 𝑀 = . , 𝑆𝐷 = .
07) than in the 80% hand-raising condition. We did not find significant effects of the different experimental conditions on the self-reported experienced presence and perceived realism; overall, both measures were in the vicinity of the highest possible values.

Figure 5: Results for different hand-raising percentages: (a) pupil diameters, (b) number of fixations. Significant differences are highlighted with * and *** for 𝑝 < .05 and 𝑝 < .001, respectively.

DISCUSSION

The results show significant differences in the eye movement features between the front and back sitting-position conditions. First, participants had longer fixations in the back sitting condition. This indicates that they needed more processing time than participants sitting in the front, which can be related to difficulty in extracting information, similar to the relationship between task difficulty and mean fixation duration [47]. Second, participants sitting in the front had longer saccade durations and amplitudes, which suggests that they needed to shift their attention more during the virtual lecture. Although they were located closer to the lecture content, the longer saccade durations indicate that participants sitting in the front showed less efficient scanning behavior [19] during the lecture. We assume this was due to the narrower field of view. These results support our H1. When designing virtual classes, they should be taken into account, particularly when deciding where students are located in the classroom, depending on the context.

Our results also show consequential effects of avatar visualization style on the eye movement features. As mean fixation durations are longer in the cartoon-styled condition, we assume that participants found the cartoon-styled avatars more attractive and attention-grabbing and therefore fixated on them longer during the virtual lecture. Conversely, mean saccade durations are longer in the realistic-styled condition because the fixation durations are shorter, which is theoretically expected. Furthermore, the pupil diameters of participants in the realistic-styled condition are larger, indicating that their cognitive load was significantly higher during the lecture, as suggested by previous work [8]. This is an indication that participants may have taken the lecture more seriously and attended in a more focused manner when the visualization was realistic. These findings support our H2. Rendering realistic-styled avatars may be computationally expensive depending on the configuration; therefore, an optimal trade-off should be found that takes these behavioral results into account when designing virtual classrooms.

Furthermore, we observe significant effects on attention for the different hand-raising-based performance levels of the peer-learners. In particular, the pupil diameters of participants in the 80% condition are significantly larger than those of participants in the 35% condition. We interpret this to mean that when the performance and participation level of the peer-learners was relatively high, the participants' cognitive load increased, indicating that they may have paid more attention to the lecture content. This partially supports our H3. In addition, a greater number of fixations is observed in the 65% condition than in the 80% condition. We conjecture that when almost all of the peer-learners raised their hands during the lecture, participants acknowledged this information without significantly shifting their gaze. However, this claim requires further investigation. Manipulating the hand-raising conditions may also affect student self-concept [57], which should be studied further as well.

In our study, interaction and perception in the immersive VR classroom were assessed mainly by using eye-gaze and head-pose information.
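As a rough illustration of how the fixation and saccade features discussed above can be derived from raw gaze samples, the sketch below implements a simple velocity-threshold (I-VT) classifier in the spirit of Salvucci and Goldberg [52]. The sampling rate and velocity threshold are illustrative assumptions, not the configuration used in this study.

```python
# Minimal I-VT sketch: label inter-sample intervals as fixation or saccade
# by angular velocity, then aggregate fixation durations and saccade
# amplitudes. Threshold and sampling rate are assumed, not the study's values.
import math

SAMPLE_RATE_HZ = 90            # assumed HMD eye tracker sampling rate
VELOCITY_THRESH_DEG_S = 30.0   # commonly used I-VT threshold (assumption)

def detect_events(gaze_deg):
    """gaze_deg: list of (x, y) gaze angles in degrees, one per sample.
    Returns (fixation_durations_s, saccade_amplitudes_deg)."""
    dt = 1.0 / SAMPLE_RATE_HZ
    # Label each inter-sample interval by its angular velocity.
    labels = []
    for (x0, y0), (x1, y1) in zip(gaze_deg, gaze_deg[1:]):
        vel = math.hypot(x1 - x0, y1 - y0) / dt
        labels.append("sacc" if vel > VELOCITY_THRESH_DEG_S else "fix")

    fix_durations, sacc_amps = [], []
    i = 0
    while i < len(labels):
        # Find the end of the current run of identical labels.
        j = i
        while j < len(labels) and labels[j] == labels[i]:
            j += 1
        if labels[i] == "fix":
            fix_durations.append((j - i) * dt)
        else:
            # Saccade amplitude: distance between the run's endpoints.
            x0, y0 = gaze_deg[i]
            x1, y1 = gaze_deg[j]
            sacc_amps.append(math.hypot(x1 - x0, y1 - y0))
        i = j
    return fix_durations, sacc_amps
```

From these per-event lists, aggregate features such as the mean fixation duration or mean saccade amplitude follow directly; in practice one would also smooth the signal and merge very short events before aggregation.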
However, while the virtual teacher and peer-learners talk in the simulations, no response or interaction by means of audio or gestures was expected from the participants. Combining visual perception and interaction with such data may provide additional insights, particularly for better interaction design in VR classrooms. A future iteration could also evolve into an everyday virtual classroom platform in which each virtual agent is connected to a real person, similar to platforms such as Mozilla Hubs. To this end, further design settings, such as the optimal seating arrangement (e.g., U-shape or circle), should be investigated in addition to the sitting positions. Evaluating similar configurations in online learning platforms such as Coursera, Udemy, or MOOCs could provide additional implications for interaction modeling. Furthermore, gaze-based attention guidance can be considered for a more interactive VR classroom experience; it can be achieved through fine-grained eye movement analysis focusing on short time windows instead of complete experiments. While out of the scope of this paper, assessing learning outcomes and combining them with visual interaction and scanpath behaviors from the immersive VR classroom could also offer insights for optimal VR classroom design.

CONCLUSION

In this work, we evaluated three major design factors of immersive VR classrooms, namely different participant locations in the virtual classroom, different visualization styles of virtual peer-learners and teachers (cartoon and realistic), and different hand-raising behaviors of peer-learners, particularly through the analysis of eye tracking data. Our results indicate that participants located in the back of the virtual classroom may have difficulty extracting information during the lecture. In addition, if the avatars in the classroom are visualized in realistic styles, participants may attend the lecture in a more focused manner instead of being distracted by the visualization styles of the avatars.
These findings offer valuable insights for design decisions in VR classroom environments. The evaluation of the different hand-raising behaviors of peer-learners yielded a few indicators that provide a general understanding of attention towards peer-learner performance; however, these indicators should be investigated further and remain a focus of future work.
ACKNOWLEDGMENTS
This research was partly supported by a grant to Richard Göllner funded by the Ministry of Science, Research and the Arts of the state of Baden-Württemberg and the University of Tübingen as part of the Promotion Program of Junior Researchers. Lisa Hasenbein is a doctoral candidate supported by the LEAD Graduate School & Research Network, which is funded by the Ministry of Science, Research and the Arts of the state of Baden-Württemberg within the framework of the sustainability funding for the projects of the Excellence Initiative II. The authors thank Stephan Soller, Sandra Hahn, and Sophie Fink from the Hochschule der Medien Stuttgart for their work and support related to the immersive virtual reality classroom used in this study.
REFERENCES
[1] Ioannis Agtzidis, Mikhail Startsev, and Michael Dorr. 2019. 360-Degree Video Gaze Behaviour: A Ground-Truth Data Set and a Classification Algorithm for Eye Movements. In Proceedings of the 27th ACM International Conference on Multimedia (Nice, France). ACM, New York, NY, USA, 1007–1015. https://doi.org/10.1145/3343031.3350947
[2] Wadee Alhalabi. 2016. Virtual reality systems enhance students' achievements in engineering education. Behaviour & Information Technology 35 (07 2016), 1–7. https://doi.org/10.1080/0144929X.2016.1212931
[3] Tobias Appel, Christian Scharinger, Peter Gerjets, and Enkelejda Kasneci. 2018. Cross-subject workload classification using pupil-related measures. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications (Warsaw, Poland). ACM, New York, NY, USA, Article 4, 8 pages. https://doi.org/10.1145/3204493.3204531
[4] Tobias Appel, Natalia Sevcenko, Franz Wortha, Katerina Tsarava, Korbinian Moeller, Manuel Ninaus, Enkelejda Kasneci, and Peter Gerjets. 2019. Predicting Cognitive Load in an Emergency Simulation Based on Behavioral and Physiological Measures. In (Suzhou, China). ACM, New York, NY, USA, 154–163. https://doi.org/10.1145/3340555.3353735
[5] Jeremy N. Bailenson, Eyal Aharoni, Andrew C. Beall, Rosanna E. Guadagno, Aleksandar Dimov, and Jim Blascovich. 2004. Comparing behavioral and self-report measures of embodied agents' social presence in immersive virtual environments. In Proceedings of the 7th Annual International Workshop on Presence. The International Society for Presence Research, Valencia, Spain, 216–223.
[6] Jeremy N. Bailenson, Andrew C. Beall, and Jim Blascovich. 2002. Gaze and task performance in shared virtual environments. The Journal of Visualization and Computer Animation 13, 5 (2002), 313–320. https://doi.org/10.1002/vis.297
[7] Jeremy N. Bailenson, Nick Yee, Jim Blascovich, Andrew C. Beall, Nicole Lundblad, and Michael Jin. 2008. The Use of Immersive Virtual Reality in the Learning Sciences: Digital Transformations of Teachers, Students, and Social Context. Journal of the Learning Sciences 17 (2008), 102–141. https://doi.org/10.1080/10508400701793141
[8] Jackson Beatty. 1982. Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin 91, 2 (1982), 276–292. https://doi.org/10.1037/0033-2909.91.2.276
[9] Friederike Blume, Richard Göllner, Korbinian Moeller, Thomas Dresler, Ann-Christine Ehlis, and Caterina Gawrilow. 2019. Do students learn better when seated close to the teacher? A virtual classroom study considering individual levels of inattention and hyperactivity-impulsivity. Learning and Instruction
[10] Contemporary Educational Psychology 62 (2020), 101894. https://doi.org/10.1016/j.cedpsych.2020.101894
[11] Efe Bozkir, David Geisler, and Enkelejda Kasneci. 2019. Assessment of Driver Attention during a Safety Critical Situation in VR to Generate VR-Based Training. In ACM Symposium on Applied Perception 2019 (Barcelona, Spain). ACM, New York, NY, USA, Article 23, 5 pages. https://doi.org/10.1145/3343036.3343138
[12] Efe Bozkir, David Geisler, and Enkelejda Kasneci. 2019. Person Independent, Privacy Preserving, and Real Time Assessment of Cognitive Load using Eye Tracking in a Virtual Reality Setup. In (Osaka, Japan). IEEE, New York, NY, USA, 1834–1837. https://doi.org/10.1109/VR.2019.8797758
[13] Andrea Casu, Lucio Davide Spano, Fabio Sorrentino, and Riccardo Scateni. 2015. RiftArt: Bringing Masterpieces in the Classroom through Immersive Virtual Reality. In Smart Tools and Apps for Graphics - Eurographics Italian Chapter Conference (Verona, Italy). The Eurographics Association, Geneva, Switzerland, 77–84. https://doi.org/10.2312/stag.20151294
[14] Kun Hung Cheng and Chin Chung Tsai. 2019. A case study of immersive virtual field trips in an elementary classroom: Students' learning experience and teacher-student interaction behaviors. Computers & Education 140 (2019), 103600. https://doi.org/10.1016/j.compedu.2019.103600
[15] Kuei-Pin Chien, Cheng-Yue Tsai, Hsiu-Ling Chen, Wen-Hua Chang, and Sufen Chen. 2015. Learning differences and eye fixation patterns in virtual and physical science laboratories. Computers & Education 82 (2015), 191–201. https://doi.org/10.1016/j.compedu.2014.11.023
[16] Joseph T. Coyne, Cyrus Foroughi, and Ciara Sibley. 2017. Pupil Diameter and Performance in a Supervisory Control Task: A Measure of Effort or Individual Differences? Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, 1 (2017), 865–869. https://doi.org/10.1177/1541931213601689
[17] Unai Díaz-Orueta, Cristina García-López, Nerea Crespo-Eguílaz, Rocío Sánchez-Carpintero, Gema Climent, and Juan Narbona. 2014. AULA virtual reality test as an attention measure: Convergent validity with Conners' Continuous Performance Test. Child Neuropsychology 20 (2014), 328–342. https://doi.org/10.1080/09297049.2013.792332
[18] Laura Freina and Michela Ott. 2015. A literature review on immersive virtual reality in education: state of the art and perspectives. In Proceedings of the 11th International Scientific Conference eLearning and Software for Education (Bucharest, Romania). Carol I NDU Publishing House, Romania, 133–141. https://doi.org/10.12753/2066-026X-15-020
[19] Joseph H. Goldberg and Xerxes P. Kotval. 1999. Computer interface evaluation using eye movements: methods and constructs. International Journal of Industrial Ergonomics 24, 6 (1999), 631–645. https://doi.org/10.1016/S0169-8141(98)00068-7
[20] Joseph H. Goldberg, Mark J. Stimson, Marion Lewenstein, Neil Scott, and Anna M. Wichansky. 2002. Eye Tracking in Web Search Tasks: Design Implications. In Proceedings of the 2002 Symposium on Eye Tracking Research & Applications (New Orleans, LA, USA). ACM, New York, NY, USA, 51–58. https://doi.org/10.1145/507072.507082
[21] Patricia Goldberg, Ömer Sümer, Kathleen Stürmer, Wolfgang Wagner, Richard Göllner, Peter Gerjets, Enkelejda Kasneci, and Ulrich Trautwein. 2019. Attentive or Not? Toward a Machine Learning Approach to Assessing Students' Visible Engagement in Classroom Instruction. Educational Psychology Review 31, 4 (2019), 1–23. https://doi.org/10.1007/s10648-019-09514-z
[22] Sandra Helsel. 1992. Virtual Reality and Education. Educational Technology 32, 5 (1992), 38–42.
[23] Jan Herrington, Thomas C Reeves, and Ron Oliver. 2007. Immersive learning technologies: Realism and online authentic learning. Journal of Computing in Higher Education 19, 1 (2007), 80–99. https://doi.org/10.1007/BF03033421
[24] Christian Hirt, Marcel Eckard, and Andreas Kunz. 2020. Stress generation and non-intrusive measurement in virtual environments using eye tracking. Journal of Ambient Intelligence and Humanized Computing 11, 1 (2020), 1–13. https://doi.org/10.1007/s12652-020-01845-y
[25] Kenneth Holmqvist, Marcus Nyström, Richard Andersson, Richard Dewhurst, Jarodzka Halszka, and Joost van de Weijer. 2011. Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford University Press, United Kingdom.
[26] Kate S. Hone and Ghada R. El Said. 2016. Exploring the factors affecting MOOC retention: A survey study. Computers & Education 98 (2016), 157–168. https://doi.org/10.1016/j.compedu.2016.03.016
[27] Zhiming Hu, Sheng Li, Congyi Zhang, Kangrui Yi, Guoping Wang, and Dinesh Manocha. 2020. DGaze: CNN-Based Gaze Prediction in Dynamic Scenes. IEEE Transactions on Visualization and Computer Graphics 26, 5 (2020), 1902–1911. https://doi.org/10.1109/TVCG.2020.2973473
[28] Halszka Jarodzka, Kenneth Holmqvist, and Hans Gruber. 2017. Eye tracking in Educational Science: Theoretical frameworks and research agendas. Journal of Eye Movement Research 10, 1 (2017). https://doi.org/10.16910/jemr.10.1.3
[29] Dongsik Jo, Ki-Hong Kim, and Gerard Jounghyun Kim. 2016. Effects of Avatar and Background Representation Forms to Co-Presence in Mixed Reality (MR) Tele-Conference Systems. In SIGGRAPH ASIA 2016 Virtual Reality Meets Physical Reality: Modelling and Simulating Virtual Humans and Environments (Macau). ACM, New York, NY, USA, Article 12, 4 pages. https://doi.org/10.1145/2992138.2992146
[30] Marcel A. Just and Patricia A. Carpenter. 1976. Eye fixations and cognitive processes. Cognitive Psychology 8, 4 (1976), 441–480. https://doi.org/10.1016/0010-0285(76)90015-3
[31] Tuomas Kantonen, Charles Woodward, and Neil Katz. 2010. Mixed reality in virtual world teleconferencing. In (Waltham, MA, USA). IEEE, New York, NY, USA, 179–182. https://doi.org/10.1109/VR.2010.5444792
[32] Muhammad Sikandar Lal Khan, Haibo Li, and Shafiq Ur Réhman. 2016. Tele-Immersion: Virtual Reality Based Collaboration. In International Conference on Human-Computer Interaction (Toronto, Canada). Springer, Cham, 352–357. https://doi.org/10.1007/978-3-319-40548-3_59
[33] Richard Lamb and Elisabeth A. Etopio. 2020. Virtual Reality: a Tool for Preservice Science Teachers to Put Theory into Practice. Journal of Science Education and Technology 29, 4 (2020), 573–585. https://doi.org/10.1007/s10956-020-09837-5
[34] Gongjin Lan, Ziyun Luo, and Qi Hao. 2016. Development of a virtual reality tele-conference system using distributed depth sensors. In (Chengdu, China). IEEE, New York, NY, USA, 975–978. https://doi.org/10.1109/CompComm.2016.7924850
[35] Yining Lang, Liang Wei, Fang Xu, Yibiao Zhao, and Lap-Fai Yu. 2018. Synthesizing Personalized Training Programs for Improving Driving Habits via Virtual Reality. In (Reutlingen, Germany). IEEE, New York, NY, USA, 297–304. https://doi.org/10.1109/VR.2018.8448290
[36] Meng-Yun Liao, Ching-Ying Sung, Hao-Chuan Wang, and Wen-Chieh Lin. 2019. Virtual Classmates: Embodying Historical Learners' Messages as Learning Companions in a VR Classroom through Comment Mapping. In (Osaka, Japan). IEEE, New York, NY, USA, 163–171. https://doi.org/10.1109/VR.2019.8797708
[37] Matthew Lombard, Theresa Bolmarcich, and Lisa Weinstein. 2009. Measuring Presence: The Temple Presence Inventory. In Proceedings of the 12th Annual International Workshop on Presence. The International Society for Presence Research, Los Angeles, CA, USA, 1–15.
[38] Aman Mangalmurti, William Kistler, Barrington Quarrie, Wendy Sharp, Susan Persky, and Philip Shaw. 2020. Using virtual reality to define the mechanisms linking symptoms with cognitive deficits in attention deficit hyperactivity disorder. Scientific Reports 10 (12 2020). https://doi.org/10.1038/s41598-019-56936-4
[39] Ronald B. Marks, Stanley D. Sibley, and J. Ben Arbaugh. 2005. A Structural Equation Model of Predictors for Effective Online Learning. Journal of Management Education 29, 4 (2005), 531–563. https://doi.org/10.1177/1052562904271199
[40] Sebastiaan Mathôt, Jasper Fabius, Elle Van Heusden, and Stefan Van der Stigchel. 2018. Safe and sensible preprocessing and baseline correction of pupil-size data. Behavior Research Methods 50, 1 (2018), 94–106. https://doi.org/10.3758/s13428-017-1007-2
[41] Richard E. Mayer. 2010. Unique contributions of eye-tracking research to the study of learning with graphics. Learning and Instruction 20, 2 (2010), 167–171. https://doi.org/10.1016/j.learninstruc.2009.02.012
[42] Christian Moro, Zane Štromberga, Athanasios Raikos, and Allan Stirling. 2017. The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anatomical Sciences Education 10, 6 (2017), 549–559. https://doi.org/10.1002/ase.1696
[43] Shivsevak Negi and Ritayan Mitra. 2020. Fixation duration and the learning process: an eye tracking study with subtitled videos. Journal of Eye Movement Research 13, 6 (2020). https://doi.org/10.16910/jemr.13.6.1
[44] Pierre Nolin, Annie Stipanicic, Mylène Henry, Yves Lachapelle, Dany Lussier-Desrochers, Albert S. Rizzo, and Philippe Allain. 2016. ClinicaVR: Classroom-CPT: A virtual reality tool for assessing attention and inhibition in children and adolescents. Computers in Human Behavior 59 (2016), 327–333. https://doi.org/10.1016/j.chb.2016.02.023
[45] Elena Olmos-Raya, Janaina Ferreira-Cavalcanti, Manuel Contero, M Concepción Castellanos, Irene Alice Chicchi Giglioli, and Mariano Alcañiz. 2018. Mobile virtual reality as an educational platform: A pilot study on the impact of immersion and positive emotion induction in the learning process. EURASIA Journal of Mathematics, Science and Technology Education 14, 6 (2018), 2045–2057. https://doi.org/10.29333/ejmste/85874
[46] Jason Orlosky, Yuta Itoh, Maud Ranchet, Kiyoshi Kiyokawa, John Morgan, and Hannes Devos. 2017. Emulation of Physician Tasks in Eye-Tracked Virtual Reality for Remote Diagnosis of Neurodegenerative Disease. IEEE Transactions on Visualization and Computer Graphics 23, 4 (2017), 1302–1311.
[47] Marc Pomplun, Tyler Garaas, and Marisa Carrasco. 2013. The effects of task difficulty on visual search strategy in virtual 3D displays. Journal of Vision
[48] Instructional Science 23, 5-6 (1995), 405–431. https://doi.org/10.1007/BF00896880
[49] Natasha Anne Rappa, Susan Ledger, Timothy Teo, Kok Wai Wong, Brad Power, and Bruce Hilliard. 2019. The use of eye tracking technology to explore learning and performance within virtual reality and mixed reality settings: a scoping review. Interactive Learning Environments 0, 0 (2019), 1–13. https://doi.org/10.1080/10494820.2019.1702560
[50] Ananda Bibek Ray and Suman Deb. 2016. Smartphone Based Virtual Reality Systems in Classroom Teaching — A Study on the Effects of Learning Outcome. In (Mumbai, India). IEEE, New York, NY, USA, 68–71. https://doi.org/10.1109/T4E.2016.022
[51] Albert A. Rizzo, Todd Bowerly, J. Galen Buckwalter, Dean Klimchuk, Roman Mitura, and Thomas D. Parsons. 2006. A Virtual Reality Scenario for All Seasons: The Virtual Classroom. CNS Spectrums 11, 1 (2006), 35–44. https://doi.org/10.1017/S1092852900024196
[52] Dario D. Salvucci and Joseph H. Goldberg. 2000. Identifying Fixations and Saccades in Eye-Tracking Protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (Palm Beach Gardens, FL, USA). ACM, New York, NY, USA, 71–78. https://doi.org/10.1145/355017.355028
[53] Abraham Savitzky and Marcel J. E. Golay. 1964. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Analytical Chemistry 36 (1964), 1627–1639. https://doi.org/10.1021/ac60214a047
[54] Thomas Schubert, Frank Friedmann, and Holger Regenbrecht. 2001. The Experience of Presence: Factor Analytic Insights. Presence 10, 3 (2001), 266–281. https://doi.org/10.1162/105474601300343603
[55] Seung-hun Seo, Eunjoo Kim, Peter Mundy, Jiwoong Heo, and Kwanguk Kim. 2019. Joint Attention Virtual Classroom: A Preliminary Study. Psychiatry Investigation 16 (2019), 292–299. https://doi.org/10.30773/pi.2019.02.08
[56] Sharad Sharma, Ruth Agada, and Jeff Ruffin. 2013. Virtual reality classroom as an constructivist approach. In (Jacksonville, FL, USA). IEEE, New York, NY, USA, 1–5. https://doi.org/10.1109/SECON.2013.6567441
[57] Richard J. Shavelson, Judith J. Hubner, and George C. Stanton. 1976. Self-Concept: Validation of Construct Interpretations. Review of Educational Research 46, 3 (1976), 407–441. https://doi.org/10.3102/00346543046003407
[58] Adalberto L. Simeone, Marco Speicher, Andreea Molnar, Adriana Wilde, and Florian Daiber. 2019. LIVE: The Human Role in Learning in Immersive Virtual Environments. In Symposium on Spatial User Interaction (New Orleans, LA, USA). ACM, New York, NY, USA, Article 5, 11 pages. https://doi.org/10.1145/3357251.3357590
[59] Ömer Sümer, Patricia Goldberg, Kathleen Stürmer, Tina Seidel, Peter Gerjets, Ulrich Trautwein, and Enkelejda Kasneci. 2018. Teachers' Perception in the Classroom. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (Salt Lake City, UT, USA). IEEE, New York, NY, USA, 2315–2324.
[60] David Weintrop, Elham Beheshti, Michael Horn, Orton Kai, Kemi Jona, Laura Trouille, and Uri Wilensky. 2016. Defining Computational Thinking for Mathematics and Science Classrooms. Journal of Science Education and Technology
[61] Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vancouver, Canada). ACM, New York, NY, USA, 143–146. https://doi.org/10.1145/1978942.1978963
[62] Christine Youngblut. 1998.