Trust-Based Route Planning for Automated Vehicles
Shili Sheng, Erfan Pakdamanian, Kyungtae Han, Ziran Wang, John Lenneman, Lu Feng
Shili Sheng
School of Engineering, University of Virginia
[email protected]

Erfan Pakdamanian
School of Engineering, University of Virginia
[email protected]

Kyungtae Han
Toyota InfoTech Labs
[email protected]

Ziran Wang
Toyota InfoTech Labs
[email protected]

John Lenneman
Toyota Collaborative Safety Research Center
[email protected]

Lu Feng
School of Engineering, University of Virginia
[email protected]
ABSTRACT
Several recent works consider personalized route planning based on user profiles, but none accounts for human trust. We argue that human trust is an important factor to consider when planning routes for automated vehicles. This paper presents the first trust-based route planning approach for automated vehicles. We formalize the human-vehicle interaction as a partially observable Markov decision process (POMDP) and model trust as a partially observable state variable of the POMDP, representing the human's hidden mental state. We designed and conducted an online user study with 100 participants on the Amazon Mechanical Turk platform to collect data on users' trust in automated vehicles. We build data-driven models of trust dynamics and takeover decisions, which are incorporated into the POMDP framework. We compute optimal routes for automated vehicles by solving for optimal policies in POMDP planning. We evaluated the resulting routes via human subject experiments with 22 participants on a driving simulator. The experimental results show that participants taking the trust-based route generally accrued higher cumulative POMDP rewards and reported more positive responses in the after-driving survey than those taking the baseline trust-free route.
CCS CONCEPTS
• Human-centered computing → Ubiquitous and mobile computing; • Computing methodologies → Planning and scheduling.

KEYWORDS
Trust, Automated Vehicle, Route Planning
ACM Reference Format:
Shili Sheng, Erfan Pakdamanian, Kyungtae Han, Ziran Wang, John Lenneman, and Lu Feng. 2021. Trust-Based Route Planning for Automated Vehicles. In ACM/IEEE 12th International Conference on Cyber-Physical Systems (with CPS-IoT Week 2021) (ICCPS '21), May 19–21, 2021, Nashville, TN, USA. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3450267.3450529

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ICCPS '21, May 19–21, 2021, Nashville, TN, USA
© 2021 Association for Computing Machinery. ACM ISBN 978-1-4503-8353-0/21/05...$15.00
https://doi.org/10.1145/3450267.3450529
1 INTRODUCTION
Recent years have witnessed significant advances in the development of automated vehicles, which have already been tested over millions of miles on public roads [4]. However, fully autonomous vehicles that do not require human intervention are still decades away due to technology, infrastructure, and regulation limitations [18]. The majority of automated vehicles available to the general public nowadays are at Level 2 and Level 3 of automation [12], which allow the driver to turn attention away from the primary task of driving; but the driver must still be prepared to take over control of the vehicle when necessary. The human's decision on whether or not to rely on the automation is guided by trust. Prior studies have found that distrust is a main barrier to the adoption of automated vehicles [27]; in addition, users with lower trust levels take over control of the vehicle more frequently [28]. On the other hand, overtrust in automation can lead to catastrophic outcomes (e.g., fatal Tesla autopilot crashes [3]). Therefore, in order to improve safety and user experience, there is a need to account for human trust in the system design of automated vehicles.

In this paper, we consider the design of a route planning system for the navigation of automated vehicles. Existing route planning methods (e.g., [7, 19, 26]) mostly focus on computing routes that optimize metrics such as distance, travel time, and fuel consumption. Several recent works (e.g., [9, 13, 41]) consider personalized route recommendation based on user profiles (e.g., mobility options, frequently visited places). However, none of the existing route planning methods explicitly accounts for human trust. We argue that human trust is an important factor to consider when planning routes for automated vehicles.
For example, if the driver has lower trust in the automated vehicle's capability for safely navigating urban streets with pedestrians constantly crossing, as opposed to freeways, the driver may prefer a freeway despite the longer distance.

To the best of our knowledge, this paper presents the first work on trust-based route planning for automated vehicles. There are several challenges in developing this work. First, how to measure and model human trust in automation, which is a hidden mental state influenced by many factors and changing over time [38]. Second, how to incorporate the trust model into route planning while accounting for the human-vehicle interaction (e.g., takeover decisions). Finally, how to evaluate the proposed trust-based route planning approach. In the following, we provide an overview of how we address these challenges in this work.

We follow the notion of trust in automation defined in [34], which views human trust as delegation of responsibility for actions to the automation and willingness to accept risk (possible harm), while the decision to delegate is based on a subjective evaluation of the automation's capability for a particular task. To concretize the problem, we consider a motivating example where the automated vehicle may encounter three types of typical road incidents (i.e., pedestrian, obstacle, and oncoming truck). Trust is therefore affected by the human's takeover decision and the vehicle's capability of handling an incident. We adopt the commonly used method of measuring the subjective belief of trust via user questionnaires. Specifically, we designed and conducted an online user study with 100 participants on the Amazon Mechanical Turk platform.
We asked users to watch various driving videos recorded from the driver's view and answer questions about their trust in the automated vehicle's capability of safely handling the incident shown in the video on a 7-point Likert scale, as well as whether they would like to take over control of the vehicle, imagining that they were the driver sitting inside the automated vehicle. We model the evolution of trust dynamics (i.e., how trust changes over time) as a linear Gaussian system using the data collected from the online user study. We also build data-driven models to predict the human's takeover decisions.

We formalize the human-vehicle interaction as a partially observable Markov decision process (POMDP), which is a general modeling framework for planning under uncertainty [24]. We model trust as a partially observable state variable of the POMDP, representing the human's hidden mental state. In addition, there are three observable state variables representing the vehicle position, the incident type, and the success/failure of the vehicle handling an incident. The estimated trust dynamics model informs the probabilistic transition function of the trust variable in the POMDP. There are two actions: the human's takeover decision and the vehicle's route choice. Since the vehicle does not know the human's actual takeover decision in advance, it assumes that the human follows the data-driven takeover decision models estimated using the online user study data. The goal of POMDP planning is to compute an optimal policy that makes route choices to maximize the expectation of the cumulative reward, with a reward function designed to promote better safety and user experience of automated vehicles.

We applied the proposed trust-based route planning approach to the motivating example and obtained two routes: a trust-based route where the human makes takeover decisions based on trust dynamics and incidents, and a trust-free route (as a baseline for comparison) where the human's takeover decisions only depend on incidents.
We evaluated and compared the performance of these two routes via human subject experiments on a driving simulator. We conducted experiments with 22 participants, who were randomly assigned to two equal-sized groups for the between-subject study (each group had 11 participants, who took one of the two routes). The experimental results show that participants taking the trust-based route generally accrued higher cumulative POMDP rewards and reported more positive responses in the after-driving survey than those taking the trust-free route.
Contributions.
We summarize the major contributions of this work as follows.
• We developed the first trust-based route planning approach for automated vehicles, which is based on a POMDP framework and uses data-driven models of trust dynamics and takeover decisions.
• We designed and conducted an online user study with 100 participants on the Amazon Mechanical Turk platform to collect data about users' trust in automated driving.
• We designed and conducted human subject experiments with 22 participants on a driving simulator to evaluate the proposed approach, which showed encouraging results.
Paper organization.
The rest of the paper is organized as follows. We discuss the related work in Section 2, describe the motivating example in Section 3, present the trust-based route planning approach in Section 4, describe the driving simulator experiments in Section 5, and draw conclusions in Section 6.
2 RELATED WORK
In this section, we survey the related work on two topics: (1) route planning for vehicles, and (2) trust in automation. For each topic, we identify gaps in the state of the art and discuss the connection with this paper.
2.1 Route Planning for Vehicles
The goal of route planning is to compute optimal routes for vehicles. The most commonly used metrics include distance, travel time, and fuel consumption. Graph search algorithms such as Dijkstra's algorithm [14] and the A* algorithm [21] can be applied to find the shortest-distance path between any two locations. Computing the fastest route (i.e., with the least travel time) is more challenging than finding the shortest-distance route. Kanoulas et al. [26] extended the A* algorithm by considering the speed change at different times of the day to compute the fastest route. Gonzalez et al. [19] developed an adaptive fastest-route planning method based on information learned from historical traffic data, accounting for various factors (e.g., road quality, weather condition, area crime rate) that may influence vehicle speed patterns. Andersen et al. [7] proposed to find the most eco-friendly route by assigning eco-weights based on GPS and fuel consumption data.

There are several recent studies considering personalized route recommendation for various users. Campigotto et al. [9] developed a method for personalized route planning by using Bayesian learning to update a user's profile, such as home location, workplace, and mobility options. Dai et al. [13] recommended a personalized optimal route considering user preferences encoded as a ratio between different metrics such as distance, travel time, and fuel consumption. Zhu et al. [41] proposed a personalized and time-sensitive route planning method, in which they inferred users' preferences for locations and visiting times from historical data.

None of the aforementioned route planning methods considers human trust. In this paper, we aim to fill this gap by developing a trust-based route planning approach.
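As a point of reference for the graph-search baselines above, the shortest-distance computation can be sketched with Dijkstra's algorithm over a weighted road graph (the toy graph and edge weights below are illustrative, not taken from any surveyed method):

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest-distance search; graph: {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")  # target unreachable

# Illustrative road graph with edge weights as distances:
roads = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
```

Time-dependent or personalized planners replace the static edge weights with learned, context-dependent costs, but the search skeleton stays the same.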
2.2 Trust in Automation
Trust in the context of human-technology relationships can be roughly classified into three categories: (1) credentials-based trust, which is used mainly in security and determines if a user can be trusted based on a set of credentials [25]; (2) experience-based trust, which includes reputation-based trust in peer-to-peer and e-commerce applications and determines an agent's trust value based on its own experience in predicting the probability of the execution of a certain action by another agent [30]; and (3) cognitive trust, which explicitly accounts for not only the human experience, but also subjective judgment about preferences and mental states [17]. In this paper, we are interested in humans' trust in automated vehicles, and therefore consider cognitive trust, which captures the human notion of trust. Specifically, we follow the notion of trust in automation proposed in [34], which indicates the human's willingness to rely on automation.

Studies have found that human trust changes over time during the interaction with automation, affected by various factors such as the automation's reliability, predictability, and transparency [20, 38]. Studies have also shown that trust can influence the human's reliance on automation, and the system is likely to be under-utilized if humans mistrust the automation [16]. For example, a recent study found that users with lower trust tended to take over control from automated vehicles more frequently [28]. Inspired by insights drawn from these prior studies, we develop a data-driven trust dynamics model to represent the evolution of human trust in automated vehicles, and a takeover decision model to associate the likelihood of the human's takeover decision with trust.

Different methods to measure trust have been proposed. User questionnaires are commonly used to evaluate the subjective belief of trust [36, 40].
For example, the study in [11] asked questions about users' trust in automated vehicles on a 7-point Likert scale. In addition, various sensing technologies have been used for the continuous measurement of human trust in real time, including gaze tracking [22], gestures (e.g., face touching and arms crossed) [35], and biometrics (e.g., electroencephalogram and galvanic skin response) [23]. We measure human trust on a 7-point Likert scale via questionnaires in the online user study, and via continuous user control input (i.e., pressing buttons mounted on the steering wheel) in the driving simulator study.

Existing works about trust in automated vehicles include investigating factors that influence users' adoption of automated vehicles [32, 33, 39], studying the effect of alarm timing on driver's trust [5], and designing forward collision warning systems [29] and cruise control systems [8] to improve users' trust. By contrast, this paper develops a route planning approach that accounts for trust to improve the user experience of automated vehicles.

Several recent works have explored the idea of modeling trust with POMDPs. For example, a POMDP model for trust-workload dynamics in Level 2 driving automation was developed in [6], and a POMDP-based method for human-robot collaboration in table cleaning tasks was proposed in [10]. Our work is inspired by these methods. We focus on trust-based route planning for automated vehicles, which requires different POMDP modeling.
Figure 1: An example map with three types of road incidents (pedestrian, obstacle, and oncoming truck).
3 MOTIVATING EXAMPLE
We describe a motivating example of route planning for automated vehicles. Figure 1 shows an example map, where three types of typical incidents that may occur on the road are considered: (1) a pedestrian crossing the road, (2) an obstacle ahead in the lane, and (3) an oncoming truck in the neighboring lane. We can easily generalize to more complex examples with a richer set of incidents. For simplicity, we assume that each road segment may have up to one incident at a time. We also assume that the vehicle has information about the potential incident that it may encounter in the next road segment. Such information can be easily obtained, for example, via sensing and crowd-sourced traffic monitoring apps.

Figure 2 shows a schematic view of the automated vehicle traveling from one location to another. Suppose that the vehicle is approaching an incident in the autopilot mode. Due to safety concerns, the driver may decide to take over control of the vehicle and switch to manual driving. Such takeover decisions can be influenced by the driver's trust in the automated vehicle's capability of handling different types of incidents: a driver with lower trust is more likely to take over. In addition, the driver's trust evolves over time depending on the takeover decision and the vehicle's capability of handling an incident.
The goal of this work is to develop a trust-based route planning approach that computes an optimal route for the automated vehicle (e.g., navigating from A to K in the example map) while taking into account human trust dynamics and takeover decisions.
4 TRUST-BASED ROUTE PLANNING APPROACH
We present a trust-based route planning approach for automated vehicles. The key idea is to model the human-vehicle interaction as a POMDP and compute the optimal vehicle route by solving for the optimal policy in POMDP planning.
4.1 POMDP Framework
Formally, a POMDP is denoted as a tuple (S, A, T, R, O, δ, γ), where S is a finite set of states, A is a set of actions, T is the transition function representing conditional transition probabilities between states, R : S × A → ℝ is the real-valued reward function, O is a set of observations, δ is the observation function representing the conditional probabilities of observations given states and actions,
Figure 2: A schematic view of an automated vehicle navigating from one location to another. When approaching an incident, the driver may decide to take over and switch to manual driving. The takeover decision can be influenced by the driver's trust in the automated vehicle, which evolves over time.

Figure 3: The POMDP graphical model for trust-based route planning. (Each node represents a state variable. Shadowed nodes are partially observable variables. Squares represent actions. Arrows represent transition functions.)

and γ ∈ [0, 1] is the discount factor. At each time step t, given an action a_t ∈ A, a state s_t ∈ S evolves to s_{t+1} ∈ S with probability T(s_{t+1} | s_t, a_t). The agent receives a reward R(s_t, a_t), and makes an observation o_{t+1} ∈ O about the next state s_{t+1} with probability δ(o_{t+1} | s_{t+1}, a_t). The goal of POMDP planning is to compute the optimal policy that chooses actions to maximize the expectation of the cumulative reward E[Σ_{t=0}^{∞} γ^t R(s_t, a_t)].

Figure 3 illustrates a graphical model of the proposed POMDP framework for trust-based route planning. We factor the state s_t at time t into four variables: v_t represents the vehicle position, i_t represents the road incident, y_t represents the automated vehicle's capability of safely handling the incident, and u_t is a partially observable variable representing the human's trust in the automated vehicle (because trust is a hidden human mental state that cannot be directly observed by the vehicle agent). We factor the action a_t at time t into two variables: the vehicle route choice c_t and the human's takeover decision h_t. Given the vehicle's current position v_t and the route choice action c_t, we can determine the next vehicle position v_{t+1} by the transition function T(v_{t+1} | v_t, c_t).
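The belief update implied by this POMDP definition is the standard Bayes filter. Below is a minimal sketch, assuming tabular models where T[a][s][s'] = T(s' | s, a) and Z[a][s'][o] = δ(o | s', a); the storage layout is our assumption, not the paper's implementation:

```python
def belief_update(b, a, o, T, Z):
    """Return b'(s') ∝ δ(o | s', a) · Σ_s T(s' | s, a) · b(s)."""
    n = len(b)
    # Predict step: push the current belief through the transition model.
    predicted = [sum(T[a][s][sp] * b[s] for s in range(n)) for sp in range(n)]
    # Correct step: weight by the likelihood of the received observation.
    unnormalized = [Z[a][sp][o] * predicted[sp] for sp in range(n)]
    total = sum(unnormalized)
    return [x / total for x in unnormalized]

# Tiny 2-state, 2-observation example (illustrative numbers):
T = {0: [[0.9, 0.1], [0.2, 0.8]]}
Z = {0: [[0.7, 0.3], [0.4, 0.6]]}
b_next = belief_update([0.5, 0.5], 0, 0, T, Z)
```

Solvers then plan over this belief state rather than over the hidden state itself.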
The potential incident i_t that the vehicle may encounter is determined by the vehicle position with probability T(i_t | v_t), and the automated vehicle's capability of safely handling the incident i_t is given by T(y_t | i_t). As discussed in Section 2, trust in automation can be influenced by many factors. Here, we model the evolution of trust dynamics with a probabilistic transition function T(u_{t+1} | u_t, y_t, i_t, h_t), based on a simplified assumption that trust evolves depending on the takeover decision and the vehicle's capability of handling an incident. The intuition is that trust may increase when the human chooses not to take over and witnesses the automated vehicle successfully handling an incident, and trust may decrease if the automated vehicle fails to handle an incident.

The vehicle agent does not know the human's actual takeover action in advance, so it computes the optimal POMDP policy π* of route choices c_t based on a model that predicts the human's takeover decision h_t. We consider two different takeover decision models for comparison: (1) a trust-free model, denoted by π_h(h_t | i_t, y_t), where the human decides whether to take over depending on the incident and a fixed belief about the automated vehicle's capability to handle certain types of incidents; and (2) a trust-based model, denoted by π_h(h_t | i_t, y_t, u_t), where the human makes takeover decisions based on the incident and trust, indicating that the human's belief about the automated vehicle's capability changes over time depending on the trust dynamics.

Consider the motivating example described in Section 3. The vehicle position v_t is one of the locations {A, B, ..., K} shown in the map (Figure 1). The incident i_t can take one of four values: null, pedestrian, obstacle, and truck. The vehicle's capability y_t of handling incidents has binary outcomes: success and failure.
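For concreteness, the factored spaces of the motivating example can be enumerated directly. This is a sketch using only the names stated in the paper, with trust discretized to the 7-point scale used for the observations:

```python
# Factored state space for the motivating example.
positions = list("ABCDEFGHIJK")                          # vehicle position v_t
incidents = ["null", "pedestrian", "obstacle", "truck"]  # road incident i_t
capability = ["success", "failure"]                      # vehicle capability y_t
trust_levels = list(range(1, 8))                         # hidden trust u_t, 7-point scale

# Actions: the vehicle's route choice c_t (map-dependent) and the human's
# binary takeover decision h_t.
takeover = [False, True]

n_states = len(positions) * len(incidents) * len(capability) * len(trust_levels)
```

Even this small example yields 11 × 4 × 2 × 7 = 616 joint states, which is why trust is tracked through a belief rather than observed directly.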
Since the human's trust is a partially observable variable u_t representing the hidden mental state, we use an observation variable û_t to represent the subjective trust on a 7-point Likert scale (1 and 7 indicate the lowest and highest levels of trust, respectively), measured via user questionnaires. The available route choices c_t are given by the map. For example, at location A, the vehicle may choose one of the three routes colored in yellow, red, and green to navigate to B, C, and D, respectively. The human takeover decision h_t is a binary choice of whether or not to take over control of the vehicle and resume manual driving. We can define the transition functions T(v_{t+1} | v_t, c_t) and T(i_t | v_t) based on the map. We can estimate T(y_t | i_t) from historical testing logs of the automated vehicle safely handling incidents. For the motivating example, we assume that the automated vehicle can always safely handle incidents (but the human driver has no prior knowledge of this assumption).

We design a reward function, shown in Table 1, for the motivating example. Intuitively, we want to reward better safety and user experience of automated vehicles. If the automated vehicle handles an incident successfully, we assign positive rewards based on the difficulty of the driving tasks. When approaching a pedestrian incident, the automated vehicle needs to stop before the crosswalk and wait

Figure 4: Screenshots of driving videos used in the online user study, covering three types of incidents: (a) a pedestrian crossing the road, (b) an obstacle (a stopped truck) ahead of the lane, (c) an oncoming truck in the neighboring lane.
Each sub-figure shows: (top) the driver's view when the automated vehicle is approaching the incident, (middle) the view of autonomous driving if the driver chooses to not take over, (bottom) the view of manual driving if the driver chooses to take over.

Table 1: Reward function for the motivating example
                     Pedestrian  Obstacle  Truck
Autopilot (Success)       3          2        1
Autopilot (Failure)      -9         -6        0
Manual driving            0          0        0

until the pedestrian finishes crossing the road. When approaching an obstacle incident, the automated vehicle needs to perform a lane change in order to avoid collision with the obstacle. When there is an oncoming truck in the neighboring lane, the automated vehicle needs to keep driving in the same lane. Thus, we rank the pedestrian incident as the most difficult task and assign it the highest reward value of 3, followed by the obstacle incident with a reward value of 2 and the truck incident with a reward value of 1. On the other hand, if the automated vehicle fails to handle an incident safely, we assign negative rewards based on the severity of the incident (e.g., striking a pedestrian can cause more serious damage than colliding with an obstacle). We assign zero reward to manual driving, because we want to promote better user experience and let the driver enjoy non-driving tasks (e.g., reading or using mobile devices) in the automated vehicle. In addition, we assign a reward value of 5 to an empty road (i.e., no incident, thus no failure or takeover) to indicate that this is the most favorable choice.

For the rest of this section, we describe the design of an online user study for data collection in Section 4.2; we present a data-driven method to estimate the trust dynamics
T(u_{t+1} | u_t, y_t, i_t, h_t) and the observation function δ(û_t | u_t) in Section 4.3; we describe the data-driven modeling of the trust-free takeover decision π_h(h_t | i_t, y_t) and the trust-based takeover decision π_h(h_t | i_t, y_t, u_t) in Section 4.4; and finally, we apply the proposed approach to the motivating example and present the computed optimal routes in Section 4.5.

4.2 Online User Study
We designed and conducted an online user study with 100 anonymous participants on the Amazon Mechanical Turk platform. The objective of this study is to collect data about humans' trust in automated vehicles. In particular, we investigated how trust evolves with respect to different incidents on the road and how humans' takeover decisions are affected by incidents and trust. We created a set of driving videos using the PreScan driving simulation software [1]. Figure 4 shows screenshots of example videos covering the three types of incidents (i.e., pedestrian, obstacle, and oncoming truck) used in the motivating example. This study was approved by the Institutional Review Board at the University of Virginia.
During the online user study, we first established the baseline by asking participants about their trust in automated vehicles on a 7-point Likert scale (i.e., trust ranges from 1 to 7). Then, we showed a video of the automated vehicle approaching an incident on the road from the driver's view, and asked participants if they would like to take over control of the vehicle and switch to manual driving, imagining that they were the driver sitting inside the automated vehicle. Depending on the participant's response of takeover or not, we showed a next video in which the vehicle is driven either autonomously or manually to handle the incident. After that, we asked participants to fill in a questionnaire which estimates their updated trust in the automated vehicle. We adapted Muir's questionnaire [37] and asked participants to answer the following questions on a 7-point Likert scale:
(1) To what extent can you predict the automated vehicle's behavior from moment to moment?
(2) To what extent can you count on the automated vehicle to do its job?
(3) What degree of faith do you have that the automated vehicle will be able to cope with similar incidents in the future?
(4) Overall how much do you trust the automated vehicle?
We averaged a participant's responses to these four questions into a single rating between 1 and 7 to represent the participant's updated trust. We repeated the above process nine times (three times per incident type) with a randomized order of incidents.

We did not include any vehicle crash video in this study, because we assume that the automated vehicle is capable of handling all incidents safely. For example, the vehicle would automatically stop and wait for the pedestrian to cross the lane, or change lanes to avoid the obstacle. However, participants were not aware of such information in advance.
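The protocol above yields one record per participant: a baseline rating followed by nine rounds. A sketch of the record layout and the Muir-item averaging, with illustrative values:

```python
# Sketch of one participant's record from the study protocol: a baseline
# 7-point trust rating, then nine (incident, takeover, updated_trust) rounds
# (three per incident type, in randomized order). Values are illustrative.
record = {
    "baseline_trust": 4.0,
    "rounds": [
        {"incident": "pedestrian", "takeover": False, "updated_trust": 4.75},
        # ... eight more rounds
    ],
}

def updated_trust(muir_responses):
    """Average the four adapted Muir questionnaire items into one rating."""
    assert len(muir_responses) == 4
    return sum(muir_responses) / 4
```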
They make takeover decisions based on their trust beliefs about the automated vehicle's capability to safely handle certain incidents, and the trust levels may change based on their experience of watching prior incident videos.

The data we collected from each participant has the following format: D = {û_0, i_1, h_1, û_1, ..., i_9, h_9, û_9}, where û_t is the measured user trust, i_t is the incident type, and h_t is the user's decision of whether or not to take over, at each time step t. In order to guarantee data quality, our study recruitment criteria required that participants be able to read English fluently and have an above-95% approval rate on the Amazon Mechanical Turk platform. We also inserted attention-check questions during the user study.

4.3 Trust Dynamics Model
As described in Section 4.1, the proposed POMDP framework for trust-based route planning represents human trust as a partially observable variable u_t at time step t, which evolves to u_{t+1} over time depending on the human's takeover decision h_t and the automated vehicle's capability y_t to handle incident i_t. Using the data collected from the online user study described in Section 4.2, we model the trust dynamics and the POMDP observation function as a linear Gaussian system:

T(u_{t+1} | u_t, y_t, i_t, h_t) = N(α_t u_t + β_t, σ_t)
û_t ∼ N(u_t, σ_u)

where N(μ, σ) denotes the Gaussian distribution with mean μ and variance σ; α_t and β_t are linear coefficients of the trust dynamics given y_t, i_t, and h_t; and û_t represents the observations of trust measured via subjective questionnaires in the online user study.

Figure 5: Visualization of probabilistic transition matrices of the learned trust dynamics model, where u_t and u_{t+1} are shown as trust-before and trust-after values ranging from 1 to 7, and each matrix corresponds to a pair of incident and takeover decision.
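For planning over the discrete 7-point trust scale, a linear-Gaussian dynamic of this form can be discretized into transition rows like those visualized in Figure 5. A sketch, with illustrative coefficients rather than the fitted values:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2); handles +/- infinity via erf's limits."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def trust_transition_row(u, alpha=0.9, beta=0.8, sigma=0.6):
    """P(u' = k | u) for k = 1..7: integrate N(alpha*u + beta, sigma^2) over
    [k - 0.5, k + 0.5], folding the tails into the boundary levels 1 and 7.
    alpha, beta, sigma are placeholders, not the study's fitted parameters."""
    mu = alpha * u + beta
    row = []
    for k in range(1, 8):
        lo = -math.inf if k == 1 else k - 0.5
        hi = math.inf if k == 7 else k + 0.5
        row.append(normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma))
    return row
```

Stacking one such row per current trust level, per (incident, takeover) pair, yields the kind of 7 × 7 matrices shown in Figure 5.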
We estimate these parameter values using full Bayesian inference with a Hamiltonian Monte Carlo sampling algorithm [15].

Figure 5 illustrates a visualization of the learned trust dynamics model. There are six probabilistic transition matrices, corresponding to all combinations of the three road incidents and the binary human takeover decisions. Each transition matrix indicates the probability of changing from u_t (trust-before value) to u_{t+1} (trust-after value). We observe that trust values are more likely to increase when the human decides not to take over (top row of Figure 5), while trust values tend to stay constant or decrease when there is a takeover decision (bottom row of Figure 5). These observations are consistent with the insight from prior studies (see Section 2) that takeover decisions are often correlated with trust.

4.4 Takeover Decision Models
In the POMDP framework, we use the variable h_t to denote the human's takeover decision (i.e., whether or not to take over control of the vehicle) when approaching an incident i_t at time step t. Such takeover decisions may also be influenced by human trust u_t. In the following, we present two takeover decision models based on whether or not trust is considered as an influencing factor.

Trust-free takeover decision model.
Let b_i denote the human's belief in the automated vehicle's capability of safely handling an incident i, which remains constant in the trust-free model. Let p_t denote the probability of the human deciding not to take over at time step t. We define p_t = S(b_i r_{s,i} + (1 − b_i) r_{f,i}), where S(x) = 1/(1 + e^{−x}) is the sigmoid function, and r_{s,i} and r_{f,i} are the rewards of the automated vehicle handling the incident i with success and failure (see Table 1), respectively. We model the takeover decision with a Bernoulli distribution, denoted by h_t ∼ B(p_t).

Figure 6: Predictions of takeover likelihood with respect to trust and incidents, using trust-based and trust-free takeover decision models.

Trust-based takeover decision model.
Let b_{i,t} denote the human's belief in the automated vehicle's capability of safely handling an incident i at time step t, which evolves over time depending on the human trust u_t. Thus, we model the belief as a sigmoid function b_{i,t} = S(κ_i u_t + λ_i), where κ_i and λ_i are linear coefficients associated with the incident i. We assume that the human trust u_t follows a Gaussian distribution, denoted by û_t ∼ N(u_t, σ_u), where û_t are the measured trust values from the online user study. We define the probability of the human deciding not to take over as p_t = S(b_{i,t} r_{s,i} + (1 − b_{i,t}) r_{f,i}), which is defined similarly to the trust-free model, but using the dynamic belief b_{i,t} instead of the constant b_i. Finally, the takeover decision is given by the Bernoulli distribution h_t ∼ B(p_t).

Data-driven modeling results.
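The two takeover decision models can be sketched side by side as follows. The reward values R_SUCCESS and R_FAIL are placeholder stand-ins for r_{s,i} and r_{f,i} from Table 1 (the actual values are not reproduced here), and the coefficient values passed in are illustrative.

```python
import math

def sigmoid(x):
    """S(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative stand-ins for the rewards r_{s,i} and r_{f,i} of Table 1.
R_SUCCESS, R_FAIL = 2.0, -10.0

def p_no_takeover_trust_free(b_i):
    """Trust-free model: constant belief b_i in the vehicle's capability."""
    return sigmoid(b_i * R_SUCCESS + (1.0 - b_i) * R_FAIL)

def p_no_takeover_trust_based(u_t, kappa_i, lambda_i):
    """Trust-based model: the belief b_{i,t} = S(kappa_i * u_t + lambda_i)
    evolves with the current trust value u_t."""
    b_it = sigmoid(kappa_i * u_t + lambda_i)
    return sigmoid(b_it * R_SUCCESS + (1.0 - b_it) * R_FAIL)
```

With a positive κ_i, a higher trust value raises the belief b_{i,t} and hence the probability of not taking over, which reproduces the downward takeover-likelihood trend shown for the trust-based model in Figure 6.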
We applied full Bayesian inference with the Hamiltonian Monte Carlo sampling algorithm [15] to estimate parameters in both the trust-free and trust-based models, using the data collected from the online user study. The log-likelihood results show that the trust-based model (−359.37) fits the collected data better than the trust-free model (−446.83). The difference in log-likelihood shows that accounting for trust in the takeover decision model achieves better prediction performance, which supports our assumption that human takeover decisions are influenced by trust. Figure 6 shows model predictions of takeover probability with respect to trust and incidents. With the trust-free model, since the takeover decision does not depend on human trust, we observe three straight lines for the three incidents. With the trust-based model, we observe a general trend of decreasing takeover likelihood with increasing trust, which is consistent with findings in prior studies (see Section 2). Furthermore, we observe from the results of both models that humans are more likely to take over for riskier incidents: pedestrian with the highest takeover probability, followed by obstacle and truck.
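The log-likelihood comparison above rests on scoring each observed takeover decision under the Bernoulli model. A minimal sketch of that computation (the function name and example numbers are illustrative, not the study data):

```python
import math

def takeover_log_likelihood(p_no_takeover, took_over):
    """Log-likelihood of observed takeover decisions under a Bernoulli model.

    p_no_takeover : model-predicted probabilities p_t of NOT taking over
    took_over     : booleans, True where the participant took over
    """
    total = 0.0
    for p, h in zip(p_no_takeover, took_over):
        # P(takeover) = 1 - p_t; P(no takeover) = p_t
        total += math.log(1.0 - p) if h else math.log(p)
    return total
```

The model assigning the higher (less negative) total log-likelihood to the collected data fits better, which is how the −359.37 versus −446.83 comparison is read.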
We applied the Approximate POMDP Planning (APPL) Toolkit [2], which is an implementation of the point-based SARSOP algorithm for efficient POMDP planning [31], to compute the optimal policies of the proposed POMDP framework. For the motivating example,
depending on the use of the trust-based and trust-free takeover decision models, we obtained two optimal routes:
• trust-based route: A-D-G-J-K
• trust-free route: A-C-E-H-K
Note that the main difference between these two routes is the order of road incidents. In the trust-based route, the ordered incidents occurring in each road segment are oncoming truck (A-D), null (D-G), obstacle (G-J), and pedestrian (J-K). In the trust-free route, the incidents follow the order of pedestrian (A-C), null (C-E), obstacle (E-H), and oncoming truck (H-K). We evaluate and compare the performance of these two routes via human subject experiments on a driving simulator, as described in the next section.

Figure 7: Driving simulator setup. The top zoomed-in view shows the GUI displaying the driver's current trust value, along with other information such as driving mode, velocity, gear, incident alarm, and vehicle action. The bottom zoomed-in view shows the steering wheel with buttons for takeover commands and user trust input.

We describe the design, procedure, and results of our driving simulator experiments as follows.
Apparatus.
Figure 7 shows the driving simulator setup used for the experiments. The hardware platform is based on the Force Dynamics 401CR driving simulator, which is a four-axis motion platform that tilts and rotates to simulate the experience of being in a vehicle. The platform includes the seat, interlocked seat belt, interlocked doors, display screen, steering wheel, brake, paddle shifters, and throttle. There are two buttons mounted on the steering wheel (bottom zoomed-in view in Figure 7). We programmed the simulator's control input such that the driver can switch between automated and manual driving by pressing the two buttons simultaneously. In addition, we used the same set of buttons to measure participants' trust in automated vehicles during the experiments. (This human subject study was approved by the Institutional Review Board at the University of Virginia.) The driver can
press the left (resp. right) button to decrease (resp. increase) the trust value, ranging from 1 to 7.
Driving scenario.
We created a driving scenario based on the motivating example described in Section 3, using the PreScan driving simulation software [1]. We also programmed an autopilot controller for the simulated automated vehicle, which has the capability of leveraging the integrated sensors (e.g., radar, Lidar, and GPS) in PreScan for various driving tasks such as lane keeping and detecting and handling incidents.
Manipulated factor.
We manipulate a single factor: the route that the autopilot controller follows. As stated in Section 4.5, the two conditions are: trust-based route and trust-free route.
Dependent measures.
We are interested in studying which route brings a higher cumulative reward. We recorded the participants' takeover decisions and calculated the cumulative POMDP reward using the reward function defined in Table 1.
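The dependent measure reduces to summing rewards over each participant's recorded incident/decision pairs. A minimal sketch, where the reward values in TABLE are hypothetical placeholders standing in for Table 1 (which is not reproduced here):

```python
def cumulative_reward(events, reward_table):
    """Sum the POMDP rewards over one participant's drive.

    events       : list of (incident, decision) pairs recorded along the route
    reward_table : maps (incident, decision) -> reward, standing in for Table 1
    """
    return sum(reward_table[event] for event in events)

# Hypothetical reward values for illustration only (not the paper's Table 1):
# letting the autopilot handle an incident earns more than taking over.
TABLE = {
    ("pedestrian", "autopilot"): 3.0, ("pedestrian", "takeover"): -1.0,
    ("obstacle", "autopilot"): 2.0, ("obstacle", "takeover"): -1.0,
    ("truck", "autopilot"): 1.0, ("truck", "takeover"): -1.0,
}
```

Under such a table, a drive with fewer takeovers accumulates a higher reward, which is why the measure reflects both safety and user experience.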
Hypothesis.
We hypothesize that participants taking the trust-based route obtain higher cumulative POMDP rewards than those taking the trust-free route.
Subject allocation.
We recruited 22 participants (average age: 23.7 years, SD = 4.3 years, 31.8% female) from the university community. Each participant was compensated with a $20 gift card for completing the experiment. The recruitment criteria required all participants to have a valid driver license, at least one year of driving experience, and normal or corrected-to-normal vision. To avoid participants' bias, we adopted a between-subject study design: we randomly allocated 11 participants to take the trust-based route and the other 11 participants to experience the trust-free route.
Procedure.
Upon arrival, a participant was instructed to read and sign a consent form approved by the Institutional Review Board. We conducted a five-minute training to help the participant get familiar with the driving simulator setup. Then, the participant was instructed to drive through the trust-based or trust-free route with the simulated automated vehicle, depending on the assigned study group. The journey started in the autopilot mode. When the vehicle approached an incident (i.e., pedestrian, obstacle, or truck), it alerted the participant by issuing an auditory alarm and displaying textual information about the incident type in the GUI. If the participant decided not to take over, the vehicle would continue in the autopilot mode to handle the incident. The participant could take over control of the vehicle and switch to manual driving at any point during the experiment. If the participant did take over, they were required to switch back to the autopilot mode after the vehicle passed that incident. We asked the participant to periodically record their trust in the automated vehicle using the buttons on the steering wheel (see bottom left in Figure 7). After the driving session, we asked the participant to answer the following survey questions on a 7-point Likert scale (1 means strongly disagree, 4 is neutral, 7 means strongly agree).
Q1 I believe that the automated vehicle can get me to the destination safely.
Q2 I find the route easy to drive.
Q3 I find it easy to take over control of the automated vehicle.
Figure 8: The cumulative rewards of participants taking the trust-based and trust-free routes.

Figure 9: Participants' average takeover likelihood when the vehicle approaches different incidents in the trust-based and trust-free routes.
Q4 I have concerns about using the automated vehicle to drive through this route.
Q5 I believe that the selected route is not dangerous.
Q6 I think the selected route fits well with the way I would like to drive.
Q7 I can depend on the reliability of the automated vehicle.
It took about 40 minutes for each participant to complete the entire experiment.
We calculated the cumulative POMDP rewards (using the reward function defined in Table 1) for each participant, based on their takeover decisions when approaching incidents along the route. Figure 8 shows the box plot of the cumulative rewards of all participants. We observe that participants taking the trust-based route tend to achieve higher cumulative rewards than participants taking the trust-free route, which is consistent with our study hypothesis. We also performed a one-way analysis of variance (ANOVA) to evaluate this hypothesis, i.e., comparing the observed F-test statistic with the F-distribution with between-group degree of freedom 1 and within-group degree of freedom 20 (for the two groups of 11 participants). The observed statistic exceeds the critical value at significance level 0.01. Thus, our study hypothesis is statistically supported by the ANOVA results.

Figure 9 shows the average takeover likelihood of all participants, for different incidents along the two routes. It is not surprising to find that participants are more likely to take over in the trust-free route than in the trust-based route. With both routes, participants have higher probabilities of taking over when approaching a pedestrian than an obstacle, while none of them chose to take over control
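For two groups, the one-way ANOVA F statistic used above can be computed directly; the following is a sketch for checking such comparisons (comparing the statistic against a critical value still requires an F-distribution table or a statistics library, which is omitted here).

```python
def one_way_anova_f(group_a, group_b):
    """F statistic of a one-way ANOVA with two groups.

    Between-group df is 1; within-group df is len(group_a) + len(group_b) - 2
    (1 and 20 for the two groups of 11 participants here).
    """
    all_vals = group_a + group_b
    grand_mean = sum(all_vals) / len(all_vals)
    groups = (group_a, group_b)
    means = [sum(g) / len(g) for g in groups]
    # Variation of group means around the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Variation of individual values around their own group mean.
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_within = len(all_vals) - 2
    return (ss_between / 1) / (ss_within / df_within)
```

A large F indicates that the between-group difference in mean cumulative reward is large relative to the within-group spread.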
Figure 10: The evolution of participants' average trust along the trust-based and trust-free routes. (The shadow represents the 95% confidence interval.)

Figure 11: After-driving survey results. (Each box plot shows the maximum, the first quartile, the median, the third quartile, and the minimum. Each dot represents an outlier.)

when there was an oncoming truck in the neighboring lane. A possible explanation is that participants are more likely to take over when approaching incidents that are more challenging to handle or can cause more severe damage. These trends are consistent with the takeover predictions computed using the online user study data (see Figure 6).

Figure 10 shows how participants' average trust in the automated vehicle evolved as they drove through different locations along the two routes. For the trust-based route, we observe that the average trust increases in the route segment A-D, which may result from the automated vehicle successfully handling the incident of an oncoming truck in this segment. The trust continues to increase in the segment D-G, which is an empty road without any incident. However, the trust decreases in the next segment G-J, where the vehicle needs to change lanes to avoid an obstacle, and the trust further decreases in the last segment J-K, where the vehicle needs to stop and wait for a pedestrian to cross the road. The decrease in average trust may be explained by the occurrence of more challenging and riskier incidents. For the trust-free route, we observe that the average trust drops sharply in the first route segment A-C with a pedestrian incident, and then increases slowly for the rest of the route. The average trust of participants taking the trust-based route is generally higher than that of participants taking the trust-free route.

Figure 11 summarizes the participants' responses to the after-driving survey questions.
The results of Q1 indicate that participants who experienced the trust-based route had higher belief in the automated vehicle's capability of driving safely than participants who experienced the trust-free route. The results of Q2 show that participants found the trust-based route easier to drive than the trust-free route. The results of Q3 illustrate that participants driving through the trust-based route found it easier to take over control of the vehicle than those driving through the trust-free route. The results of Q4 show that participants who experienced the trust-based route had less concern about the automated vehicle than those who experienced the trust-free route. The results of Q5 indicate that participants tended to have a neutral opinion about how dangerous the routes are. The results of Q6 show that participants generally thought the trust-based route fit the way they would like to drive better than the trust-free route. The results of Q7 find that participants driving through the trust-based route perceived higher reliability of the automated vehicle than those who experienced the trust-free route. In summary, our human subject experimental results show that
• Participants taking the trust-based route generally resulted in higher cumulative POMDP rewards (where the reward function was designed to promote better safety and user experience of automated vehicles) than those taking the trust-free route.
• Participants were more likely to take over in the trust-free route than in the trust-based route; and riskier incidents led to higher takeover likelihood.
• Participants' trust in the automated vehicle evolved over time during the driving experience and was influenced by different types of incidents.
• Participants who experienced the trust-based route had more positive responses in the after-driving survey than those driving through the trust-free route.
In this paper, we present a trust-based route planning approach for automated vehicles. We model the human-vehicle interaction as a POMDP and compute optimal routes for the vehicle by solving the POMDP planning problem. In order to incorporate trust into the route planning, we build data-driven models of trust dynamics and takeover decisions using data collected from an online user study with 100 participants on the Amazon Mechanical Turk platform. We applied the proposed trust-based route planning approach to a motivating example and obtained a trust-based route and a trust-free route (as a baseline for comparison). We evaluated these two routes via human subject experiments with 22 participants on a driving simulator. The results show that participants taking the trust-based route generally resulted in higher cumulative POMDP rewards (where the reward function was designed to promote better safety and user experience of automated vehicles), were less likely to take over control of the vehicle, and reported more positive responses in the after-driving survey than those taking the trust-free route. In addition, we observed that participants' trust changed over time during the study and was influenced by different road incidents. These observations are consistent with the findings of prior studies.

This work makes the first step towards incorporating human trust into route planning for automated vehicles. There are a few directions for future work. First, we would like to evaluate the scalability of the proposed approach. We believe that the proposed
POMDP-based approach can be applied to larger route planning problems (e.g., larger maps, more locations, and more route choices). However, the bottleneck lies in the evaluation: we would need to design and conduct new human subject experiments to evaluate the resulting routes of each problem, which can be costly and time-consuming. Second, we would like to consider a richer set of incident types to reflect the complex road conditions that automated vehicles may encounter in the real world. We would need to design and conduct new online user studies to collect data about trust in the automated vehicle's capability of safely handling these new incident types and build new data-driven trust dynamics models. Furthermore, we would like to explore the POMDP modeling of other factors that may influence humans' trust in automated vehicles, such as system transparency and predictability.
ACKNOWLEDGMENTS
This work was supported in part by National Science Foundation grants CCF-1942836 and CNS-1755784. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the grant sponsors.
REFERENCES
Transportation Research Part F: Traffic Psychology and Behaviour 7, 4-5 (2004), 307–322.
[6] Kumar Akash, Neera Jain, and Teruhisa Misu. 2020. Toward Adaptive Trust Calibration for Level 2 Driving Automation. arXiv preprint arXiv:2009.11890 (2020).
[7] Ove Andersen, Christian S Jensen, Kristian Torp, and Bin Yang. 2013. Ecotour: Reducing the environmental footprint of vehicles using eco-routes. In , Vol. 1. IEEE, 338–340.
[8] Béatrice Cahour and Jean-François Forzy. 2009. Does projection into use improve trust and exploration? An example with a cruise control system. Safety Science 47, 9 (2009), 1260–1270.
[9] Paolo Campigotto, Christian Rudloff, Maximilian Leodolter, and Dietmar Bauer. 2016. Personalized and situation-aware multimodal route recommendations: the FAVOUR algorithm. IEEE Transactions on Intelligent Transportation Systems 18, 1 (2016), 92–102.
[10] Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. 2018. Planning with trust for human-robot collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 307–315.
[11] Jong Kyu Choi and Yong Gu Ji. 2015. Investigating the importance of trust on adopting an autonomous vehicle. International Journal of Human-Computer Interaction 31, 10 (2015), 692–702.
[12] SAE On-Road Automated Vehicle Standards Committee et al. 2018. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International: Warrendale, PA, USA (2018).
[13] Jian Dai, Bin Yang, Chenjuan Guo, and Zhiming Ding. 2015. Personalized route recommendation using big trajectory data. In . IEEE, 543–554.
[14] Edsger W Dijkstra et al. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1, 1 (1959), 269–271.
[15] Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. 1987. Hybrid Monte Carlo. Physics Letters B.
[16] International Journal of Human-Computer Studies 58, 6 (2003), 697–718.
[17] Rino Falcone and Cristiano Castelfranchi. 2001. Social trust: A cognitive approach. In Trust and Deception in Virtual Societies. Association for Computing Machinery, Inc, 794–805.
[20] Peter A Hancock, Deborah R Billings, Kristin E Schaefer, Jessie YC Chen, Ewart J De Visser, and Raja Parasuraman. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors 53, 5 (2011), 517–527.
[21] Peter E Hart, Nils J Nilsson, and Bertram Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4, 2 (1968), 100–107.
[22] Sebastian Hergeth, Lutz Lorenz, Roman Vilimek, and Josef F Krems. 2016. Keep your scanners peeled: Gaze behavior as a measure of automation trust during highly automated driving. Human Factors 58, 3 (2016), 509–519.
[23] Wan-Lin Hu, Kumar Akash, Neera Jain, and Tahira Reid. 2016. Real-Time Sensing of Trust in Human-Machine Interactions. IFAC-PapersOnLine 49, 32 (2016), 48–53.
[24] Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence.
[25] Computer 34, 12 (2001), 154–157.
[26] Evangelos Kanoulas, Yang Du, Tian Xia, and Donghui Zhang. 2006. Finding fastest paths on a road network with speed patterns. In . IEEE, 10–10.
[27] Kanwaldeep Kaur and Giselle Rampersad. 2018. Trust in driverless cars: Investigating key factors influencing the adoption of driverless cars. Journal of Engineering and Technology Management 48 (2018), 87–96.
[28] Moritz Körber, Eva Baseler, and Klaus Bengler. 2018. Introduction matters: Manipulating trust in automation and reliance in automated driving. Applied Ergonomics 66 (2018), 18–31.
[29] Arnaud Koustanaï, Viola Cavallo, Patricia Delhomme, and Arnaud Mas. 2012. Simulator training with a forward collision warning system: Effects on driver-system interactions and driver trust. Human Factors 54, 5 (2012), 709–721.
[30] Karl Krukow, Mogens Nielsen, and Vladimiro Sassone. 2008. Trust models in ubiquitous computing. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences.
[31] Hanna Kurniawati, David Hsu, and Wee Sun Lee. 2008. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems, Vol. 2008. Zurich, Switzerland.
[32] John D Lee and Kristin Kolodge. 2019. Exploring trust in self-driving vehicles through text analysis. Human Factors (2019), 0018720819872672.
[33] John D Lee, Shu-Yuan Liu, Joshua Domeyer, and Azadeh DinparastDjadid. 2019. Assessing Drivers' Trust of Automated Vehicle Driving Styles With a Two-Part Mixed Model of Intervention Tendency and Magnitude. Human Factors (2019), 0018720819880363.
[34] John D Lee and Katrina A See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors 46, 1 (2004), 50–80.
[35] Jin Joo Lee, Brad Knox, and Cynthia Breazeal. 2011. Modeling the Dynamics of Nonverbal Behavior on Interpersonal Trust for Human-Robot Interactions. Ph.D. Dissertation. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences.
[36] Nikolas Martelaro, Victoria C Nneji, Wendy Ju, and Pamela Hinds. 2016. Tell me more: Designing HRI to encourage more trust, disclosure, and companionship. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction. IEEE Press, 181–188.
[37] Bonnie Marlene Muir. 2002. Operators' trust in and use of automatic controllers in a supervisory process control task. (2002).
[38] Kristin E Schaefer, Jessie YC Chen, James L Szalma, and Peter A Hancock. 2016. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors 58, 3 (2016), 377–400.
[39] Shili Sheng, Erfan Pakdamanian, Kyungtae Han, BaekGyu Kim, Prashant Tiwari, Inki Kim, and Lu Feng. 2019. A case study of trust on autonomous driving. In . IEEE, 4368–4373.
[40] Anqi Xu and Gregory Dudek. 2015. OPTIMo: Online probabilistic trust inference model for asymmetric human-robot collaborations. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, 221–228.
[41] Xiaoyan Zhu, Ripei Hao, Haotian Chi, and Xiaojiang Du. 2017. Fineroute: Personalized and time-aware route recommendation based on check-ins.