Importance of Instruction for Pedestrian-Automated Driving Vehicle Interaction with an External Human Machine Interface: Effects on Pedestrians' Situation Awareness, Trust, Perceived Risks and Decision Making
Hailong Liu, Takatsugu Hirayama, Masaya Watanabe

Abstract—An external human machine interface (eHMI) can be viewed as an explicit communication method for providing the driving intentions of an automated driving vehicle (AV) to pedestrians. However, the eHMI may not guarantee that pedestrians will fully recognize the intention of the AV. In this paper, we propose that instruction in the eHMI's rationale can help pedestrians correctly understand the driving intentions and predict the behavior of the AV, so that their subjective feelings (i.e., feeling of danger, trust in the AV, and feeling of relief) and decision-making also improve. The results of an interaction experiment in a road-crossing scene indicate that the participants found it more difficult to be aware of the situation when they encountered an AV w/o eHMI than when they encountered a manual driving vehicle (MV); further, the participants' subjective feelings and their hesitation in decision-making also deteriorated significantly. When the eHMI was used in the AV, the situational awareness, subjective feelings, and decision-making of the participants regarding the AV w/ eHMI improved. After the instruction, it was easier for the participants to understand the driving intention and predict the driving behavior of the AV w/ eHMI. Further, the subjective feelings and the hesitation related to decision-making improved and reached the same standards as those for the MV.
Hailong Liu is with the Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan. [email protected]
Takatsugu Hirayama is with the Institutes of Innovation for Future Society, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan. [email protected]
Masaya Watanabe is with the Vehicle Development Center, Toyota Motor Corporation, Toyota-cho, Toyota, Aichi, 471-8572, Japan. masaya [email protected]

I. INTRODUCTION

Automated driving vehicles (AVs) are expected to be widely used in the near future [1], [2]. Under this assumption, traffic scenarios are expected to contain a mixture of AVs and pedestrians, such as shared spaces, intersections with no traffic lights, narrow roads, and parking lots. However, compared to manual driving vehicles (MVs), AVs cannot yet convey information to pedestrians in interactions, especially close encounters.

In the interaction between MVs and pedestrians, the latter often understand the intention of the MV through implicit information such as the speed, acceleration, and steering angular velocity of the vehicle, and the direction of the wheels [2]. Further, some explicit information can help pedestrians understand the intention of the driver, such as head nods and hand gestures from the driver [3], [4]. In particular, communication through explicit information is very important for close interaction scenes. A typical example is when a pedestrian encounters an MV on a narrow road without traffic signals or in a parking lot; in this case, the driver can clearly convey their intention and quickly reach a consensus with the pedestrian via eye contact, hand gestures, and verbal communication.

However, for level 3–5 AVs driven by the automated system, the driver does not participate in the driving task [5]. Therefore, the AV may disregard useful explicit communication in its interaction with pedestrians.
This makes it difficult for pedestrians to understand the intentions of the AV quickly and clearly [6], [7], particularly in complex urban environments where informal communication strategies such as waving hands are frequently used today [1]. In fact, this limitation can cause potential issues such as safety hazards, inefficiency, and poor pro-sociality [8]. Thus, how the AV communicates with pedestrians has become a pressing concern.

To solve this issue, a novel communication approach using an external human machine interface (eHMI) can be viewed as one of the solutions [9]. In particular, various studies have evaluated the efficacy of eHMIs for presenting the intention of an AV to pedestrians using light bars, icons, and text [10], [11]. Although these studies advocate good eHMI designs, these eHMIs cannot guarantee that pedestrians will fully recognize the intention of the AV, fully understand the rationale behind the AV's intention, and perceive the limitations of the AV's functional abilities, especially pedestrians who do not have considerable experience with eHMIs.

For example, even if pedestrians clearly recognize that the AV has yielded the right of way via the message "you go first" displayed on the eHMI, it may be difficult for them to understand under what circumstances the AV will display this information, especially when the sensor range is unknown. Furthermore, pedestrians may be unsure about the time available for them to cross the road while the message is displayed, because it may be difficult for them to predict when the vehicle will depart.

To solve the above problem of the eHMI, one solution is to increase the number of interactions between pedestrians and AVs equipped with eHMIs to ensure that pedestrians learn the intention of the AV through the information on the eHMI [12]. However, this method requires pedestrians to pay the time cost of learning.
Further, pedestrians may remain in danger because of insufficiently understanding the intention of the AV during the learning process. We consider that the fundamental approach to solving this problem is to help pedestrians quickly establish a correct understanding using a mental model [13] of the AV with eHMI. In this study, we propose a simpler approach that instructs pedestrians to understand the execution conditions, mechanisms, and function limitations of the AV and the eHMI.

Fig. 1: The proposed cognitive-decision-behavior model of a pedestrian based on the model in [7]. (The model links the state of the environment, situation awareness—perception, comprehension, and projection within the situation model—hazard perception, risk evaluation of perceived risk against a target risk, and decision making; the instruction shapes the mental model, which supplies prior information for comprehension and projection. Q1–Q6 are six questions for the subjective evaluations.)

Further, this approach helps pedestrians gain situational awareness for interacting with the AV and to understand the intention and predict the behavior of the AV correctly. Thus, this approach can help pedestrians improve their subjective feelings regarding AVs, which can increase the social acceptance of AVs.

Many related studies have reported that instructions for an in-vehicle human machine interface (iHMI) of the AV have enabled drivers to improve their understanding of using the AV, their interactive performance with the AV, and their trust in the AV [14], [15], [16]. However, owing to the growing popularity of AVs, both drivers and pedestrians interact with AVs; therefore, instructing pedestrians to understand the intention of the AV correctly when interacting with it is an urgent issue that is yet to be studied widely.

In this paper, we propose using an instruction to help pedestrians establish a correct mental model of the eHMI on an AV. We investigate changes in situational awareness and various subjective evaluations of pedestrians when interacting with vehicles under various scenarios based on a cognitive-decision-behavior model of pedestrians. The experimental results not only verify the effectiveness of the eHMI for the interaction of pedestrians and AVs, but also show that instruction in the rationale of the eHMI can improve the situational awareness, subjective feelings, and decision-making of pedestrians towards AVs.

II. PURPOSE AND HYPOTHESIS

In this study, we aim to analyze the influence of instructing the rationale of eHMIs on the situational awareness, subjective feelings, and decision-making of pedestrians when they encounter an AV.
We verify the following hypothesis:

H: If pedestrians correctly understand the rationale of the eHMI through the instruction, then the eHMI on the AV can help improve the situational awareness, subjective feelings, and decision-making of pedestrians during the interaction.

To verify this hypothesis, we further consider the mechanism of the mental model in the cognitive-decision-behavior process of pedestrians based on a model proposed in our previous studies [6], [7], which is shown in Fig. 1. This cognitive-decision-behavior model includes three parts: situational awareness, risk evaluation based on hazard perception, and decision making based on risk homeostasis. The situational awareness of a pedestrian includes their ability to perceive objects in the surrounding environment (i.e., perception), understand their state and intention (i.e., comprehension), and predict their state in the future (i.e., projection). Then, the pedestrian perceives hazards based on their prediction results and evaluates the magnitude of the subjective risk. Next, the pedestrian decides their behavior by comparing the subjective risk with the acceptable risk level. This risk compensation process can be explained by the risk homeostasis theory [17].

The mental model is an internal representation that contains meaningful declarative and procedural knowledge generated from long-term experiences [18]. It is spontaneously generated by recognizing and interpreting a system through repeated use [19]. Further, the situation model is the current instantiation of the mental model; that is, the relationship between the mental model and the situation model is a two-layer structure [20]. The situation model can be viewed as a prediction model constructed and supported by an underlying mental model [13], [21].
We consider that the mental model provides the situation model with some prior information and knowledge to guide it in performing comprehension and projection for a given situation.

Based on the above arguments, we propose forming and calibrating the mental model correctly by instructing pedestrians with relevant knowledge about the eHMI for interacting with the AV. We design an experiment to verify the effectiveness of the eHMI for the pedestrian-AV interaction; further, we demonstrate the significance of the instruction in improving the situational awareness, subjective feelings, and decision-making of pedestrians during the interaction.

III. WIZARD OF OZ EXPERIMENT

A pedestrian–car interaction experiment was conducted in the B2F parking lot of the Toyota Stadium, Toyota-shi, Aichi, Japan. The safety of the experiment site (i.e., the B2F parking lot) was ensured by blocking access to the public. This experiment was approved by the ethics review committee of the Institutes of Innovation for Future Society, Nagoya University.
A. Experimental car
The experimental car is shown in Fig. 3. In general, cars in Japan are right-hand drive vehicles. Therefore, to simulate an unmanned driving car, a left-hand drive car (TOYOTA Prius) was used as the experimental vehicle, and an expert driver was hidden in the left seat behind a mirror film. In addition, a dummy steering wheel was installed on the right side to make the participants believe that this was a right-hand drive car. Considering the experimental scene of the parking lot and the safety of the participants, the maximum speed of the car was restricted to 8 km/h and the average speed was about 4 km/h.

Fig. 2: Experimental scene: simulating a pedestrian's encounter with a car in a parking lot.

Fig. 3: Experimental car: a left-hand drive car is used to simulate a right-hand drive AV. (Labeled components: mirror film for hiding the real driver, dummy steering wheel, automated driving mode marker, and eHMI.)
B. Design of eHMI
The eHMI device is set behind the right side of the windshield. After the AV stops, the message "動きません" ("UGOKIMASEN") is displayed immediately. This message implies that the car will not move now. The eHMI blinks twice at one-second intervals after the pedestrian has crossed the road. This indicates that the car will move. After blinking, the eHMI is turned off and the car departs.

There are two reasons for these settings: (1) We do not want the car to command pedestrians, because of liability. "UGOKIMASEN" only indicates the current state and driving intention of the car to the pedestrians. It does not tell the pedestrians what to do, such as "You go first." The pedestrians need to decide their walking behavior by themselves based on the state of the vehicle and the message on the eHMI. (2) This message can help us compare the effectiveness of the instruction, because it gives pedestrians only a vague understanding of the AV's intention in the specific context of Japanese. For example, the pedestrians may think that the AV is asking for help because the car has broken down when "UGOKIMASEN" is displayed after the car halts. Moreover, the timing and action conditions of the message blinking are also unclear to the pedestrians. This is an important point about which the pedestrians should be instructed.
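The display sequence described above (stop → show message → pedestrian crosses → blink twice → turn off → depart) can be summarized as a small state machine. The following is a minimal sketch, not the authors' implementation; the class, method names, and discrete-event framing are assumptions made for illustration (the actual system was a Wizard-of-Oz setup operated by a hidden driver):

```python
# Illustrative sketch of the eHMI display logic described in Sec. III-B.
# Event-handler names are hypothetical; they are not part of the study.

class EHMI:
    OFF, SHOWING = "off", "showing"
    MESSAGE = "UGOKIMASEN"  # 動きません: "the car does not move now"

    def __init__(self):
        self.state = EHMI.OFF

    def on_vehicle_stopped(self):
        """The AV has stopped before the stop line: show the message."""
        self.state = EHMI.SHOWING
        return EHMI.MESSAGE

    def on_pedestrian_crossed(self):
        """The pedestrian has finished crossing: blink twice at
        one-second intervals, then turn off (the car then departs)."""
        blinks = 2
        self.state = EHMI.OFF
        return blinks
```

Note that the interface never issues a command to the pedestrian; it only reports the vehicle's own state, which matches the liability rationale above.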
C. Pedestrians
Thirty-two participants took part in this experiment as pedestrians. They were within the age range of 23–68 years (mean: 49.12, standard deviation: 11.13). Further, 17 participants were female and the remaining 15 were male. Before the experiment, the following information was provided to the participants:

1) Imagine you drive to the shopping center. You park your car in the underground parking lot and want to go to the elevator.
2) You need to cross a road to get to the elevator. Please walk at a normal speed during this process (see the dotted line in Fig. 2).
3) When you cross the road, a manual driving car (MV) or an automated driving car (AV) will arrive. You should be mindful of the car when crossing the road.
4) The AV is a driver-less car. It is equipped with advanced built-in sensors that can detect pedestrians and the surrounding environment such as roads and the stop line. (False information)
5) For both the MV and the AV, the pylons indicate a stop line (see Fig. 2). The car will stop before the stop line. After stopping, the car will decide whether to depart based on the surrounding situation, i.e., whether a pedestrian is present.

It is important to note that information about the eHMI was not provided to the participants in this introduction before the experiment.
D. Interaction scenarios
Four scenarios were designed to allow the pedestrians to interact with the car. Each scenario was executed five times for each pedestrian. In total, each participant experienced 20 vehicle-encounter trials.
MV:
In this scenario, a pedestrian encounters an MV. A dummy driver sits on the right seat and imitates a real driver holding the steering wheel to control the car. When the pedestrian encounters the MV, the pedestrian can see the dummy driver driving the car. Further, the dummy driver uses gestures to indicate to the pedestrian to cross the road after stopping the car.
AV w/o eHMI:
This is a scenario wherein the pedestrian encounters an AV. There is no eHMI device on the car; however, a striking marker is present on the hood of the car, which indicates that the car is in the automated driving mode (see Fig. 3). Further, there is no dummy driver sitting on the right seat. When the car encounters the participant, the car stops before the stop line (two pylons). At this time, the participant needs to decide the timing of crossing the road and their walking behavior. Then, the car departs after the participant completely crosses the road.

Fig. 4: Scenes of instructing the rationale of the eHMI on the AV to the participant: instruction by document and instruction by demonstration.
AV w/ eHMI:
This is a scenario wherein the pedestrian encounters an AV with an eHMI. The eHMI device is installed behind the right side of the windshield. The car stops in front of the stop line, and then the eHMI shows the message "UGOKIMASEN" to let the participant know that the car will not move. The participant needs to decide the timing for crossing the road and their walking behavior. After the participant completely crosses the road, the message on the eHMI blinks twice. Then, the eHMI is turned off and the car departs.
AV w/ eHMI after instruction:
This scenario is the same as the previous one, i.e., AV w/ eHMI. The difference is that before the participant interacts with the vehicle, the participant is instructed so as to create a correct mental model of the AV w/ eHMI. Thus, the following information about the usage conditions and meaning of the eHMI is provided to the participant:

1) When the AV detects the pedestrian, the message "UGOKIMASEN" will be displayed on the eHMI after the car stops. This indicates that the AV will not move.
2) The eHMI will blink twice at one-second intervals after the pedestrian crosses the road. This indicates that the AV will move again and depart.
3) After blinking, the eHMI is turned off. Then, the AV will depart.

This instructional scene is shown in Fig. 4. To eliminate any vague understanding by the participant of the information on the eHMI, a document containing the above information was used to explain the meaning of the information on the eHMI. Further, a demonstration was used to explain to the participants when the eHMI would be turned on and when it would blink. The participants stood on the side of the road to watch the demonstrator's explanation.

After the instruction, we answered questions raised by the participants about the content of the instruction. We conducted the experiments with the fourth scenario after confirming that the participants understood the content of the instruction. We believe that the instruction helped them build a correct mental model of the AV w/ eHMI.
E. Subjective evaluations for four scenarios
After each trial, the participants were asked to use a five-grade evaluation scale—"Strongly Agree," "Agree," "Undecided," "Disagree," and "Strongly Disagree"—to respond to the following questions:

Q1: Was it easy to understand the driving intention of the car?
Q2: Was it easy to predict the behavior of the car?
Q3: Did you feel the behavior of the car was dangerous?
Q4: Did you trust the car when you crossed the road?
Q5: Did you feel a sense of relief when you crossed the road?
Q6: Did you hesitate when you crossed the road?

As shown in Fig. 1, Q1 and Q2 are used to evaluate the comprehension and projection steps in the situation model; Q3, Q4, and Q5 are used for risk evaluation; and Q6 is used for evaluating the speed of decision making.

IV. RESULTS AND DISCUSSIONS

To verify that instructing the pedestrians about the eHMI helps them correctly understand the driving intentions and predict the behavior of the AV, thereby improving their subjective feelings about the AV (i.e., the feeling of danger, trust in the AV, and feeling of relief when they interact with the AV) and their decision-making, we analyzed the subjective evaluations of the participants for each scenario using the six questions. The subjective evaluations for each scenario were collected after each trial. For each question, the subjective evaluations of the participants under different scenarios were counted separately. Figure 5 shows the proportions of the five-grade evaluations for the six questions for each scenario.

The subjective evaluations of the pedestrians encountering the MV are used as the baseline. We compare the evaluation results from the other three scenarios with the baseline to discuss the problem of pedestrian-AV interaction, and we illustrate the effectiveness of our proposed solution, i.e., using the eHMI with instruction. Multiple differences were observed in the results of the six questions for the four scenarios when compared using the Dwass–Steel–Critchlow–Fligner (DSCF) method [22].
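The analysis pipeline—an omnibus Kruskal–Wallis test per question followed by pairwise comparisons—can be illustrated with a pure-Python computation of the Kruskal–Wallis H statistic with tie correction (ties are unavoidable with five-grade data). This is only a sketch on made-up Likert codes (1 = "Strongly Disagree" … 5 = "Strongly Agree"), not the authors' analysis code; in practice a statistics package (e.g., `scipy.stats.kruskal` plus a DSCF post-hoc routine) would be used:

```python
# Kruskal-Wallis H with tie correction, pure Python; illustrative only.
from collections import Counter

def midranks(values):
    """Rank all observations, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis_h(groups):
    """H statistic over k groups of ordinal scores (here: Likert codes 1-5)."""
    data = [x for g in groups for x in g]
    n = len(data)
    ranks = midranks(data)
    h, start = 0.0, 0
    for g in groups:
        rank_sum = sum(ranks[start:start + len(g)])
        h += rank_sum ** 2 / len(g)
        start += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    ties = sum(t ** 3 - t for t in Counter(data).values())
    return h / (1 - ties / (n ** 3 - n))  # tie-corrected H

# Hypothetical responses (1-5) for two of the four scenarios:
mv = [5, 5, 4, 5, 4, 5, 4, 4]
av_without_ehmi = [2, 3, 2, 1, 3, 2, 2, 3]
H = kruskal_wallis_h([mv, av_without_ehmi])
```

The H values reported in Fig. 5 come from this family of rank tests over all four scenarios; the DSCF procedure then determines which pairwise scenario differences are significant while controlling for multiple comparisons.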
A. Evaluation of situational awareness based on Q1 and Q2
According to the cognitive-decision-behavior model shown in Fig. 1, Q1 and Q2 are used to evaluate the comprehension and projection steps in the situation model of the participants. These results present the degree of difficulty for the participants in understanding the driving intention and predicting the behavior of the car. Because the situation model is affected by the mental model, the results of Q1 and Q2 can demonstrate the effect of the instruction on the mental model.

Fig. 5: Results of subjective evaluations for the four scenarios (MV; AV w/o eHMI; AV w/ eHMI; AV w/ eHMI after instruction), shown as proportions of the five-grade scale from "Strongly Agree" to "Strongly Disagree" for Q1–Q6; ratios are rounded to 1 decimal place. Kruskal–Wallis one-way ANOVA: Q1 H=89.6, Q2 H=87.4, Q3 H=43.3, Q4 H=74.7, Q5 H=79.5, Q6 H=71.9, all p≈0.0. Multiple comparison by the Dwass–Steel–Critchlow–Fligner (DSCF) method; *: p<0.05, **: p<0.01, ***: p<0.001.

As the baseline, the results of Q1 in Fig. 5 show that in 59.4% of the trials the participants agreed strongly that the driving intention of the MV was easy to understand. Only 3.1% of trials were evaluated by Q1 as "Disagree" for the MV. However, the DSCF results showed significant differences between the subjective evaluations of participants encountering the AV w/o eHMI and the MV for Q1 (see the pairwise results in Fig. 5).

B. Result of risk evaluation by Q3, Q4 and Q5
Q3, Q4, and Q5 are related to the subjective risks of the participants. These subjective risks are perceived and evaluated based on the results of situational awareness, as shown in Fig. 1. The feeling of danger from the behavior of the car was evaluated by Q3. It is assumed to be based on the results of Q1 and Q2; that is, the participants would evaluate the danger the car poses to them based on the perceived driving intention and predicted driving behavior. Further, the participants would adjust their trust in the car by comparing the result of the situational awareness with their experience. This result was evaluated by Q4. Further, Q5 evaluated the degree of the feeling of relief when crossing the road. It is assumed to be based on the results of Q3 and Q4. Pedestrians may cross the road with relief when they do not feel that the car is dangerous to them and they trust the car.

The results of Q3 in Fig. 5 indicate that in 26.3% and 51.9% of trials encountering the MV, the participants disagreed and strongly disagreed, respectively, that the car was dangerous. Similarly, for Q5, in 26.9% and 64.4% of trials encountering the MV, the participants agreed and strongly agreed that they were relieved when crossing the road. However, although 15.7% (8.8% + 6.9%) of the MV trials were evaluated as "Agree" or "Strongly Agree" in total for Q3, only 2.5% (2.5% + 0.0%) of the MV trials were evaluated as not relieved by Q5. This is attributed to the participants' trust in the driver; the participants may trust that even if they felt the MV was dangerous, the driver of the MV would not cause harm to them. This was illustrated by the results of Q4, which show that 26.3% and 60.0% of the MV trials were evaluated as "Agree" and "Strongly Agree", and only 2.5% and 0.6% of the trials were evaluated as "Disagree" and "Strongly Disagree".

The results of Q3, Q4, and Q5 were significantly different when the participants encountered the AV w/o eHMI compared to when they encountered the MV (see the DSCF results in Fig. 5).

C. Result of hesitation for decision making by Q6
Q6 was used to evaluate the hesitation of the participants in decision making when encountering the car. That is, this result can be used to evaluate the decision-making speed of the participants.

The results of Q6 shown in Fig. 5 indicate that the participants disagreed or strongly disagreed that they hesitated in a total of 67.6% (33.8% + 33.8%) of the trials when they encountered the MV. Further, in 14.4% and 4.4% of these trials, the participants agreed and strongly agreed, respectively, that they hesitated in making a decision to cross the road.

When the participants encountered the AV w/o eHMI, they hesitated in more trials than when they encountered the MV (see the DSCF results in Fig. 5).

V. CONCLUSION
In this paper, an interaction experiment was designed to allow participants to encounter an MV, an AV w/o eHMI, an AV w/ eHMI, and an AV w/ eHMI after instruction under a road-crossing scenario.

We found an obvious issue when the participants encountered the AV w/o eHMI. The participants felt that it became more difficult to understand the driving intention and predict the driving behavior when they encountered the AV w/o eHMI after they had habituated to interacting with the MV. Further, their subjective feelings about the AV w/o eHMI deteriorated significantly, and they became hesitant when making a decision to cross the road.

When the eHMI was used in the AV, it helped the participants understand the driving intention of the AV and predict its driving behavior; further, their subjective feelings about the AV w/ eHMI and their hesitation towards decision-making improved. However, these evaluations still did not exceed the subjective evaluations of the MV.

Instructing the pedestrians about the rationale of the eHMI on the AV helped them correctly understand the driving intentions and predict the behavior of the AV, which helped improve their subjective feelings (i.e., feeling of danger, trust in the AV, and feeling of relief) and decision-making. Note that these subjective evaluations reached the same standards as those for the MV.

In future works, we will further analyze the influence of the instruction on the behaviors of participants. We will refine the instruction and analyze its influence on pedestrians, such as by providing instructions for cognition and risk evaluation. Finally, we hope to realize the standardization of the eHMI so that we can design a more accurate method of instruction.

ACKNOWLEDGMENT

This work was supported by JSPS KAKENHI Grant Numbers JP20K19846 and JP19K12080.
REFERENCES

[1] L. Vissers, S. v. d. Kint, I. v. Schagen, and M. Hagenzieker, "Safe interaction between cyclists, pedestrians and automated vehicles; what do we know and what do we need to know?," tech. rep., The Hague, Dec 2016. R-2016-16.
[2] A. Rasouli and J. K. Tsotsos, "Autonomous vehicles that interact with pedestrians: A survey of theory and practice," IEEE Transactions on Intelligent Transportation Systems, vol. 21, pp. 900–918, March 2020.
[3] M. Sucha, D. Dostal, and R. Risser, "Pedestrian-driver communication and decision strategies at marked crossings," Accident Analysis & Prevention, vol. 102, pp. 41–50, 2017.
[4] B. Färber, "Communication and communication problems between autonomous vehicles and human drivers," in Autonomous Driving, pp. 125–144, Springer, 2016.
[5] SAE Technical Standards Board, "J3016: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles," pp. 1–30, SAE International, 2016.
[6] H. Liu, T. Hirayama, L. Y. Morales, and H. Murase, "What gaze behavior do pedestrians take in interactions when they do not understand the intention of an automated vehicle?," arXiv preprint arXiv:2001.01340, 2020.
[7] H. Liu, T. Hirayama, L. Y. M. Saiki, and H. Murase, "What timing for an automated vehicle to make pedestrians understand its driving intentions for improving their perception of safety?," pp. 462–467, 2020.
[8] C. D. Batson and A. A. Powell, "Altruism and prosocial behavior," Handbook of Psychology, pp. 463–484, 2003.
[9] A. Schieben, M. Wilbrink, C. Kettwich, R. Madigan, T. Louw, and N. Merat, "Designing the interaction of automated vehicles with other traffic participants: design considerations based on human needs and expectations," Cognition, Technology & Work, vol. 21, no. 1, pp. 69–85, 2019.
[10] M. Rettenmaier, M. Pietsch, J. Schmidtler, and K. Bengler, "Passing through the bottleneck - the potential of external human-machine interfaces," pp. 1687–1692, IEEE, 2019.
[11] S. M. Faas and M. Baumann, "Yielding light signal evaluation for self-driving vehicle and pedestrian interaction," in Human Systems Engineering and Design II (T. Ahram, W. Karwowski, S. Pickl, and R. Taiar, eds.), (Cham), pp. 189–194, Springer International Publishing, 2020.
[12] M. Hochman, Y. Parmet, and T. Oron-Gilad, "Pedestrians' understanding of a fully autonomous vehicle's intent to stop: A learning effect over time," Frontiers in Psychology, vol. 11, p. 3407, 2020.
[13] M. R. Endsley, "Toward a theory of situation awareness in dynamic systems," Human Factors, vol. 37, no. 1, pp. 32–64, 1995.
[14] S. Hergeth, L. Lorenz, and J. F. Krems, "Prior familiarization with takeover requests affects drivers' takeover performance and automation trust," Human Factors, vol. 59, no. 3, pp. 457–470, 2017.
[15] Y. Forster, S. Hergeth, F. Naujoks, J. Krems, and A. Keinath, "User education in automated driving: Owner's manual and interactive tutorial support mental model formation and human-automation interaction," Information, vol. 10, no. 4, p. 143, 2019.
[16] A. Edelmann, S. Stümper, R. Kronstorfer, and T. Petzoldt, "Effects of user instruction on acceptance and trust in automated driving," pp. 1–6, IEEE, 2020.
[17] G. J. Wilde, "The theory of risk homeostasis: implications for safety and health," Risk Analysis, vol. 2, no. 4, pp. 209–225, 1982.
[18] S. Al-Diban, Mental Models, pp. 2200–2204. Boston, MA: Springer US, 2012.
[19] N. Staggers and A. F. Norcio, "Mental models: concepts for human-computer interaction research," International Journal of Man-Machine Studies, vol. 38, no. 4, pp. 587–605, 1993.
[20] M. R. Endsley, "Situation models: An avenue to the modeling of mental models," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 44, pp. 61–64, SAGE Publications, Los Angeles, CA, 2000.
[21] R. H. Mogford, "Mental models and situation awareness in air traffic control," The International Journal of Aviation Psychology, vol. 7, no. 4, pp. 331–341, 1997.
[22] C. E. Douglas and F. A. Michael, "On distribution-free multiple comparisons in the one-way analysis of variance,"