Augmented Reality-Based Advanced Driver-Assistance System for Connected Vehicles
Ziran Wang∗, Kyungtae Han, and Prashant Tiwari
Toyota Motor North America R&D, InfoTech Labs, Mountain View, CA, USA
{ziran.wang, kyungtae.han, prashant.tiwari}@toyota.com

Abstract—With the development of advanced communication technology, connected vehicles have become increasingly popular in our transportation systems; they can conduct cooperative maneuvers with each other, as well as with other road entities, through vehicle-to-everything communication. Considerable research interest has been drawn to the other building blocks of a connected vehicle system, such as communication, planning, and control. However, fewer studies have focused on human-machine cooperation and interfaces, namely how to visualize guidance information to the driver as an advanced driver-assistance system (ADAS). In this study, we propose an augmented reality (AR)-based ADAS, which visualizes guidance information calculated cooperatively by multiple connected vehicles. An unsignalized intersection scenario is adopted as the use case of this system, where the driver can drive the connected vehicle across the intersection under AR guidance, without any full stop at the intersection. A simulation environment is built in the Unity game engine based on the road network of San Francisco, and human-in-the-loop (HITL) simulation is conducted to validate the effectiveness of our proposed system in terms of travel time and energy consumption.
I. INTRODUCTION
The emergence of connected vehicle technology during the past decades brings many new possibilities to our existing transportation systems. Specifically, the level of connectivity within our vehicles has greatly increased, allowing these "equipped" vehicles to behave in a cooperative manner not only among themselves through vehicle-to-vehicle (V2V) communication, but also with other transportation entities through vehicle-to-infrastructure (V2I), vehicle-to-cloud (V2C), and vehicle-to-pedestrian (V2P) communication, collectively referred to as vehicle-to-everything (V2X) communication.

Many research works have studied various aspects of connected vehicle systems, such as communication, perception, localization, planning, and control, where each handles one or several tasks in the system [1]. The latter four aspects are often studied in the autonomous driving domain as well, and the concept of connected and automated vehicles (CAVs) has emerged, where vehicles can conduct cooperative automation maneuvers together. However, full automation of our transportation systems is not expected to happen anytime soon, due to hurdles on both the technical side and the liability side. Therefore, during the transition from no automation to full automation in a mixed traffic environment, human-driven connected vehicles will play a crucial role, given the rich information they can share through V2X communication, as well as their ability to cooperate with other human-driven connected vehicles or CAVs.

This raises the importance of studying human-machine cooperation, as the driver of a connected vehicle needs to know how to correctly interact with the vehicle to maximize its full advantages [2].
One critical aspect of this topic is to design the human-machine interface (HMI) of the advanced driver-assistance system (ADAS), so that the information received through V2X communication can be visualized to the driver and guide him/her to drive the vehicle in a safer, more efficient, and more comfortable way.

Connected vehicles have been well researched regarding the planning and control aspects, including numerous field implementations conducted with real mass-produced vehicles. However, most of these works designed the HMI of their ADAS as a simple visualization tool for the connected vehicles' planning and control modules, such as the driver guidance in the connected eco-driving system [3], or the driver-vehicle interface design by the U.S. Department of Transportation [4]. Only a relatively small portion of them addressed the issue from the HMI perspective, designing the connected vehicle's planning and control modules according to the pattern of the HMI, making the integrated ADAS more informative while also intuitive for the driver to operate.

In this study, we propose an ADAS for connected vehicles using augmented reality (AR) as the HMI, which overlays the guidance information on the driver's field-of-view through the windshield. A specific use case of unsignalized intersections is studied, where connected vehicles (including CAVs) can cooperate with each other to cross intersections without any full stop, largely increasing the time efficiency and energy efficiency of vehicles. A slot reservation planning algorithm and a feedforward/feedback control algorithm are developed to serve the AR HMI of the ADAS. The Unity game engine is used to model the proposed system, and human-in-the-loop (HITL) simulation is conducted to validate the effectiveness of this system in the unsignalized intersection use case.

The remainder of this paper is organized as follows: Section II introduces the problem statement of this study in greater detail.
Section III develops the different modules of this ADAS on connected vehicles, including the AR HMI design for drivers, the slot reservation planning algorithm, and the feedforward/feedback control algorithm. Section IV conducts the modeling and evaluation of this ADAS in the Unity game engine, with results in HITL simulation showing the effectiveness of the system. Finally, the paper is concluded with some future directions in Section V.

II. PROBLEM STATEMENT
In this study, an ADAS is designed for connected vehicles, which includes various modules such as communication, localization, perception, planning, control, and the AR HMI. Although every module is essential to the overall system architecture, we focus on the latter three in this study. Connected vehicles in this study can either be driven by human drivers with the AR HMI, or driven by automated controllers as CAVs. A Digital Twin (i.e., cyber-physical) architecture is adopted for connected vehicles in this study, where all connected vehicles in the physical world are connected through the cyber world. The proposed cooperative maneuvers among connected vehicles do not specify or require any particular communication technology, which means vehicles can potentially be connected with the cyber world through Dedicated Short-Range Communications [5], Cellular Vehicle-to-Everything (C-V2X) [6], or a combination of both.

The general architecture of the proposed ADAS is illustrated in Fig. 1. In the physical world, connected vehicles are equipped with different hardware modules that can provide information about themselves and the surrounding traffic. For example, the CAN bus provides speed, acceleration, and much other detailed information about the ego vehicle, while the localization module provides its coordinate information. The perception module, on the other hand, provides surrounding information to the ego vehicle, such as road geometry, detected objects, and traffic conditions. All this information is fed into the processor of the ego vehicle, which processes the data locally and sends it to the cyber world through V2X communication. The processor also receives information from the cyber world and propagates it to the AR HMI for driver-assistance purposes.

Fig. 1: System architecture of the proposed AR-based ADAS for connected vehicles
It should be noted that this study does not focus on the hardware setup of the ADAS, as long as the necessary information can be gathered and provided to the planning, control, and HMI modules.

In the cyber world, the planning and control modules play important roles in the overall system. The planning module schedules different connected vehicles before they conduct cooperative maneuvers (e.g., crossing the unsignalized intersections in this use case), allowing connected vehicles to identify their desired motions. The control module calculates the particular control commands that allow vehicles to achieve their desired motions, executed either by human drivers or automated controllers.

Compared to previous studies on AR or ADAS for connected vehicles, the major contributions of this study are listed below:

• Design the planning and control modules of the ADAS to better serve the HMI, making the system human-centered.
The HMIs in most existing ADAS simply visualize the information derived from other modules of their systems, such as visualizing the traffic light information received through V2X communication [3], [7], [8], providing collision warning messages [2], [9], or displaying downstream traffic-related messages [10], [11]. In this study, however, we design the HMI using AR in a slot reservation format, and then develop the planning and control modules accordingly. The HMI is not a natural output of this ADAS, but the basis of this ADAS that facilitates human-centered human-machine cooperation.

• Adopt AR to overlay the guidance information on top of the traffic environment from the driver's field-of-view, providing more intuitive guidance.
Instead of adopting the AR concept to show traffic information on a separate display [12], [13], or to show some simple information on a head-up display [14], we adopt AR to overlay the guidance information on top of the traffic environment. This HMI provides a more intuitively meaningful indication of reference connected vehicles' presence and current status, and better assists the driving maneuvers of the driver.

• Enable cooperative maneuvers among connected vehicles through the AR HMI, and validate the system through game engine modeling and HITL simulation.
Most of the existing automotive AR HMI designs are only for the ego vehicle's maneuvers, such as navigation, speed visualization, and driving mode visualization [15], [16]. In this study, we leverage V2X communication to allow connected vehicles to conduct cooperative maneuvers with the assistance of the AR HMI. Not only do we design the AR HMI together with its associated planning and control modules in the system, but we also conduct modeling and simulation with the Unity game engine, which validates its effectiveness in an unsignalized intersection use case.

III. AUGMENTED REALITY FOR ADVANCED DRIVER-ASSISTANCE SYSTEM
A. Use Case of Unsignalized Intersections
By design, an intersection is a planned location where vehicles traveling from different directions may come into conflict, and its functional area extends upstream and downstream from the physical area of the crossing streets. Traffic signals have long played a crucial role in achieving safer performance at intersections, reducing the severity of crashes if operated properly [17]. However, the addition of unnecessary or inappropriately designed signals has adverse effects on traffic safety and mobility. In addition, the dual objectives of safety and mobility introduce trade-offs in many cases.

Therefore, designs for unsignalized intersections have emerged in recent years, which take advantage of connected vehicle technology. Specifically, approaching vehicles can be assigned specific sequences by the proposed planning/scheduling algorithms through V2X communication, and their motions are then controlled by automated controllers or by drivers with guidance information. Most existing works in this use case assume full automation of connected vehicles [18]–[21], namely that all vehicles in the system are CAVs. However, in this study, we aim to develop an AR-based ADAS that allows human-driven connected vehicles to perform cooperative maneuvers at unsignalized intersections.
This enables a more realistic application in this use case, because not all vehicles will become automated vehicles in the very near future, and there will certainly be a transition period with a mixed traffic environment (containing both human-driven and automated vehicles).

In this case study, since we focus on the effectiveness of our proposed AR-based ADAS, some reasonable specifications are made regarding the settings of the use case: 1) The ego connected vehicle can receive information regarding vehicles coming from other directions of the intersection, either directly through V2V communication, or indirectly through V2I or V2C communication; 2) Except for the ego vehicle, not all other vehicles are required to be connected vehicles. In the case that certain vehicles do not have connectivity, the perception sensors equipped on their surrounding connected vehicles or on the intersection infrastructure can measure the information of those unconnected vehicles and share it with the ego vehicle; 3) No vulnerable road users (e.g., pedestrians, bicycles, etc.) are considered in this use case.
B. Design of the Augmented Reality Human-Machine Interface
In this study, we propose an AR HMI that guides the driver to drive the connected vehicle and cross unsignalized intersections with other connected vehicles. The information that needs to be visualized to the driver through the AR HMI concerns vehicles coming from other directions of the intersections. We propose a slot reservation methodology to strategically allocate slots to different vehicles as they approach the intersection. While the details of the slot reservation will be covered in the next subsection under the planning module, the design of the reserved slots on the AR HMI is introduced here.

A simple example of the unsignalized intersection is illustrated in Fig. 2, where three connected vehicles are approaching the intersection from three directions. Once an ego vehicle is assigned a slot, its information is shared with its conflicting vehicles, whose paths have conflicting points with the ego vehicle's path. The slot reserved by the ego vehicle is then shown to the drivers of conflicting vehicles as a red "unavailable slot" through the AR HMI, so those drivers can control their vehicles to stay in the green "available slots". It should be noted that the slots are not stagnant: they are dynamically updated according to the status changes of their associated vehicles.

Fig. 2: Illustration of the slot reservation concept for connected vehicles crossing an unsignalized intersection

The high-level concept of the AR HMI is illustrated in Fig. 2, while an example from the driver's field-of-view is shown in Fig. 3. The ego vehicle is approaching an unsignalized intersection, where two unavailable red slots are visualized on the HMI, denoting that two conflicting vehicles are coming from other directions of the intersection. The driver of this ego vehicle needs to control the vehicle to keep in the green available slots, so it can avoid collisions while crossing the intersection.

Fig. 3: Driver's field-of-view of the AR HMI

Fig. 4: Coordinate transformation of the slot from the world reference frame to the AR HMI reference frame

The AR HMI is displayed on an image plane (e.g., the windshield) through a projector unit, where a front-view camera is needed to identify the road geometry, so a slot can be correctly overlaid on the road surface from the driver's field-of-view. In order to transform the slot from its global position and size (calculated in our control module) to the AR HMI, we develop a coordinate transformation algorithm based on the pinhole camera projection model, which is illustrated in Fig. 4.

The extrinsic parameter matrix in this algorithm identifies the transformation between the world reference frame and the AR HMI reference frame. It consists of a 3×3 rotation matrix R and a 3×1 translation vector t. Given a 3D point of the slot in the world reference frame p^w = (x^w, y^w, z^w), its corresponding point p^a in the AR reference frame can be calculated as

    p^a = [R  t] · [x^w, y^w, z^w, 1]^T    (1)

Then, the intrinsic parameter matrix is applied, which contains the parameters of the AR HMI's projection device, such as the focal length and lens distortion. Let (u_0, v_0) be the coordinates of the principal point of the image plane (i.e., the image center), d_x and d_y be the physical sizes of a pixel, and f be the focal length; the projected point p^i = (u, v) on the AR HMI image plane can then be calculated as

    p^i = (1 / z^a) · [f/d_x, 0, u_0; 0, f/d_y, v_0; 0, 0, 1] · p^a    (2)

Therefore, given the position and size of any slot in the world reference frame, we are able to transform the slot to the AR HMI image plane, so it can be properly shown to the driver of the vehicle. The calculation of the slot's position and size is discussed in the control module of the next subsection.

C. Planning and Control of Connected Vehicles
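As a concrete illustration, the two-step transformation of Eqs. (1)–(2) can be sketched in a few lines of Python. This is a minimal sketch of the standard pinhole model, not the paper's implementation; the numeric camera parameters below are hypothetical placeholders:

```python
import numpy as np

def world_to_image(p_w, R, t, f, dx, dy, u0, v0):
    """Project a 3D slot point from the world frame onto the AR HMI
    image plane using the pinhole camera model (Eqs. (1)-(2))."""
    # Extrinsic step, Eq. (1): world frame -> AR (camera) frame
    p_a = R @ np.asarray(p_w) + t
    # Intrinsic step, Eq. (2): perspective projection onto the image plane
    K = np.array([[f / dx, 0.0,    u0],
                  [0.0,    f / dy, v0],
                  [0.0,    0.0,    1.0]])
    uvw = (K @ p_a) / p_a[2]        # divide by the depth z^a
    return uvw[0], uvw[1]           # pixel coordinates (u, v)

# Hypothetical example: camera at the origin looking down +z
R = np.eye(3)
t = np.zeros(3)
u, v = world_to_image([1.0, 0.5, 10.0], R, t,
                      f=0.004, dx=2e-6, dy=2e-6, u0=640, v0=360)
```

In practice the extrinsic pair (R, t) would come from the calibration between the front-view camera and the projector unit, and (f, d_x, d_y, u_0, v_0) from the projection device itself.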
The planning module and the control module of this AR-based ADAS are developed to provide the inputs for the aforementioned AR HMI. As we design the AR HMI in an intuitive manner that visualizes vehicles as slots projected along the roadway, we first need our planning module to reserve those slots for the different crossing vehicles, and then use our control module to adjust the appropriate sizes and speeds of the slots based on the vehicles' real-time information.
1) Planning Module:
As illustrated in Fig. 1, the planning module takes as input the current status of the ego vehicle while approaching the intersection, and schedules its sequence of crossing the intersection by querying the slot pool. Once its slot is reserved, the ego vehicle can connect and cooperate with its reference vehicles, i.e., those that have conflicting paths with itself. The slot reservation algorithm (Algorithm 1) is developed below for the planning module of the ADAS.

Instead of adopting a first-come-first-served policy [18], which simply assigns a vehicle a slot when it enters a pre-defined geo-fence, we develop a new slot reservation algorithm that accounts for various statuses of a vehicle. The estimated time of arrival (ETA) of a vehicle is considered, which quantifies the specific point in time when the vehicle is supposed to cross the intersection. Due to space limitations, the calculation of the ETA is not covered in this subsection; it can be found in our previous work [22]. An additional step in our slot reservation algorithm, however, is that the ETA value of a vehicle is further updated by the ETA value of its immediate preceding vehicle, if any. This means the traffic condition is also considered in the calculation of the ETA, since a following vehicle's ETA cannot be earlier than its preceding vehicle's ETA, where a constant car-following time headway t_h must be guaranteed as the delay in between.

Therefore, the condition that triggers a vehicle's slot reservation request is two-fold: either its ETA is lower than a predefined time constant t_θ, or it enters a pre-defined geo-fence of this intersection. This prevents corner cases such as an ego vehicle entering the geo-fence earlier, but with a much lower speed (i.e., expected to arrive at the intersection later) than its conflicting vehicle coming from another direction. In that case, the conflicting vehicle would need to significantly decelerate to follow the ego vehicle across the intersection. Once Algorithm 1 is implemented, such unnecessary speed adjustments no longer exist, and the overall traffic throughput and energy efficiency at the intersection are improved.

Algorithm 1: Slot reservation for crossing vehicles at an unsignalized intersection
Data: Ego vehicle i's path p_i from the current link to the next link, i's longitudinal position r_i, i's longitudinal speed v_i, i's longitudinal acceleration a_i, car-following time headway t_h, reservation-trigger time constant t_θ, reservation-trigger geo-fence distance constant d_θ
Result: Ego vehicle i's reserved slot s_i, i's reference vehicles j's
  Ego vehicle i enters the current link;
  while i is not assigned s_i at the current intersection do
    Calculate the estimated time of arrival (ETA) t_i = f(r_i, v_i, a_i);
    if there is/will be an immediate preceding vehicle j on the same lane then
      Update the ETA t_i = max(t_i, t_j + t_h);
    end
    Calculate the distance to arrival d_i based on r_i;
    if t_i <= t_θ or d_i <= d_θ then
      Query each conflicting vehicle j whose path satisfies p_j ∩ p_i ≠ ∅;
      Query the slot pool for the maximum slot number s_j^max among all conflicting vehicles;
      Assign the slot number to the ego vehicle: s_i = s_j^max + 1;
      Connect each conflicting vehicle j as a reference vehicle of i;
    end
  end
  while i leaves the current link do
    Reset the slot number s_i = 0;
    Disconnect all reference vehicles;
  end
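A minimal Python sketch of this reservation logic follows. The class and attribute names are our own illustration, not the paper's implementation; note that the ETA update uses max(t_i, t_j + t_h), so that a follower never arrives before its leader, matching the text's description:

```python
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    vid: int
    eta: float                 # estimated time of arrival at the intersection (s)
    dist: float                # distance to arrival (m)
    slot: int = 0              # 0 means "no slot reserved yet"
    refs: list = field(default_factory=list)

def reserve_slot(ego, preceding, conflicting, t_h=1.5, t_theta=10.0, d_theta=100.0):
    """Assign the ego vehicle a crossing slot, in the spirit of Algorithm 1.

    `preceding` is the immediate preceding vehicle on the same lane (or None);
    `conflicting` is the list of vehicles whose paths conflict with the ego's.
    """
    # A follower cannot arrive earlier than its leader plus the time headway
    if preceding is not None:
        ego.eta = max(ego.eta, preceding.eta + t_h)
    # Trigger the reservation by ETA or by the geo-fence distance
    if ego.eta <= t_theta or ego.dist <= d_theta:
        s_max = max((v.slot for v in conflicting), default=0)
        ego.slot = s_max + 1
        ego.refs = list(conflicting)   # connect conflicting vehicles as references
    return ego.slot

# Hypothetical scenario: two conflicting vehicles already hold slots 1 and 2
ego = Vehicle(vid=0, eta=8.0, dist=120.0)
others = [Vehicle(1, eta=5.0, dist=60.0, slot=1),
          Vehicle(2, eta=6.5, dist=80.0, slot=2)]
slot = reserve_slot(ego, preceding=None, conflicting=others)
```

In a full system this routine would run repeatedly as the vehicle approaches, with the slot reset to zero once the vehicle leaves the current link.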
2) Control Module:
Once a connected vehicle is assigned a slot and connected with its reference vehicles, vehicle information is constantly transmitted among them. The control module of the ego vehicle aims to adjust the positions and sizes of its reference vehicles' reserved slots, so the slots can be better visualized on the AR HMI.

First, a target speed of the ego vehicle is calculated, based on the information received from its leading reference vehicle j, whose reserved slot is right in front of the ego vehicle's slot (i.e., s_j = s_i − 1). This target speed allows the ego vehicle to follow the movement of its reference vehicle's slot with the car-following time headway. A feedforward/feedback control algorithm is developed to calculate this target speed (to be executed at the next time step) v_i(t + δt). The feedback consensus control part is written as follows:

    v_i(t + δt) = v_i(t) − α_ij · k_ij · [ ( r_i(t) − r_j(t − τ_ij(t)) + v_i(t) · (t_h + τ_ij(t)) ) + γ_i · ( v_i(t) − v_j(t − τ_ij(t)) ) ] · δt    (3)

where δt is the length of each time step, v_i(t) is the current longitudinal speed of the vehicle, r_i(t) is the current longitudinal position of the vehicle, and α_ij denotes the corresponding value of the adjacency matrix. The time-variant communication delay between the two vehicles is denoted as τ_ij(t), which is assumed to follow a normal distribution in this study, with a mean value of 40 ms and a standard deviation of 0.0259 based on our test results [23]. The control gains k_ij and γ_i in this feedback control algorithm can either be defined as constants, or further tuned by a feedforward control algorithm to guarantee the safety, efficiency, and comfort of this slot-following process. A lookup-table approach is adopted to dynamically calculate these control gains, based on the initial speeds of the two vehicles, as well as their initial headway.
In short, this can be summarized as

    {k_ij, γ_i} = f( v_i(0), v_j(0), r_i(0) − r_j(0) )    (4)

where the details can be found in our previous work [24]. The target speed value v_i(t + δt) is directly fed into the automated controller of CAVs to control their longitudinal speed. As for the AR HMI on human-driven connected vehicles, this target speed is the input used to calculate the positions and sizes of the slots reserved by the reference vehicles, where Algorithm 2 is developed below for the control module of the ADAS.
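One discrete-time update of the feedback law in Eq. (3) can be sketched as follows. The gain and state values are hypothetical; in the paper, the gains come from the lookup table of Eq. (4):

```python
def target_speed(v_i, r_i, v_j_delayed, r_j_delayed, tau,
                 dt=0.1, t_h=1.5, alpha=1.0, k=0.1, gamma=0.5):
    """One consensus feedback update of Eq. (3): drive the spacing error and
    the speed error between ego i and its leading reference vehicle j to zero.

    v_j_delayed and r_j_delayed are j's speed and position received with
    communication delay tau, i.e., sampled at time t - tau."""
    spacing_error = (r_i - r_j_delayed) + v_i * (t_h + tau)
    speed_error = v_i - v_j_delayed
    return v_i - alpha * k * (spacing_error + gamma * speed_error) * dt

# Hypothetical states: ego 30 m behind its reference slot, both near 15 m/s,
# 40 ms delay as in the paper's communication measurements
v_next = target_speed(v_i=15.0, r_i=0.0, v_j_delayed=15.0,
                      r_j_delayed=30.0, tau=0.04)
```

Here the actual gap (30 m) exceeds the desired time-headway gap (about 23.1 m), so the update slightly raises the target speed to close the slot-following error.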
Algorithm 2: Slot adjustment for AR HMI visualization
Data: Ego vehicle i's target speed at the next time step v_i(t + δt), i's reference vehicles j, i's longitudinal position r_i, i's lateral position x_i, car-following time headway t_h, i's path p_i from the current link to the next link, j's path p_j from the current link to the next link, j's longitudinal position r_j, j's length l_j, j's width w_j
Result: j's reserved slot's longitudinal position r_j^s, lateral position x_j^s, length l_j^s, width w_j^s
  for each of ego vehicle i's reference vehicles j, j = 1, 2, ..., n do
    Calculate the conflicting point O_ij of i's path and j's path;
    Calculate the distance difference δ_ij from i's lane and j's lane to the conflicting point;
    Calculate the distances to arrival d_i and d_j based on r_i and r_j;
    while i has not crossed the conflicting point O_ij do
      Calculate j's reserved slot's longitudinal position r_j^s = r_i − (d_i − d_j) + δ_ij;
      Set j's reserved slot's lateral position x_j^s = x_i;
      Set j's reserved slot's width w_j^s = w_j;
      Calculate j's reserved slot's length l_j^s = max( l_j, v_i(t + δt) · t_h );
    end
    while i crosses the conflicting point O_ij do
      Reset the slot information r_j^s, w_j^s, l_j^s;
    end
  end
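The per-reference-vehicle slot update can be sketched as follows. This is a simplified illustration with variable names mirroring Algorithm 2; the helper structure and numeric values are our own assumptions:

```python
def adjust_slot(r_i, x_i, d_i, d_j, delta_ij, l_j, w_j, v_next, t_h=1.5):
    """Place reference vehicle j's reserved slot on the ego vehicle's lane.

    The slot is projected onto the ego lane by comparing both vehicles'
    distances to the shared conflicting point (Algorithm 2)."""
    r_slot = r_i - (d_i - d_j) + delta_ij   # longitudinal position of the slot
    x_slot = x_i                            # keep the slot in the ego lane
    w_slot = w_j                            # slot width = vehicle width
    l_slot = max(l_j, v_next * t_h)         # never shorter than the vehicle itself
    return r_slot, x_slot, l_slot, w_slot

# Hypothetical example: reference vehicle j is 20 m closer to the conflicting
# point than the ego vehicle, with a target speed of 15 m/s
r_s, x_s, l_s, w_s = adjust_slot(r_i=0.0, x_i=1.8, d_i=50.0, d_j=30.0,
                                 delta_ij=0.0, l_j=4.5, w_j=1.9, v_next=15.0)
```

The max(...) term in the slot length reflects the time-headway rule: at 15 m/s with a 1.5 s headway, the slot stretches to 22.5 m even though the vehicle itself is only 4.5 m long.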
IV. GAME ENGINE MODELING AND EVALUATION
A. Game Engine for Modeling Connected Vehicles
Game engines enable the design of video games by software developers; they typically consist of a rendering engine for 2-D or 3-D graphics, a physics engine for collision detection and response, and a scene graph for the management of multiple elements (e.g., models, sound, scripting, threading, etc.). Along with the rapid development of game engines in recent years, their functions have broadened to a wider scope: data visualization, training, medical, and military use. Game engines have also become popular options in the development of advanced vehicular technology [25], having been used to study driver behaviors [14], prototype connected vehicle systems [26], [27], and simulate autonomous driving [28], [29].

In this study, we adopt the Unity game engine to conduct the modeling and evaluation of our AR-based ADAS, given its advantages in graphics design and visualization, as well as the ease of connecting it with external driving simulators [30]. As shown in Fig. 5, the map built by LGSVL is adopted in our study, which is based on the South of Market (SoMa) district in San Francisco [29]. Shown as yellow lines on the road surface, centimeter-level routes along 2nd Street, Harrison Street, Folsom Street, Howard Street, and Mission Street are further modeled in this study, so map matching and path planning features can be enabled in our ADAS. The planning and control modules we develop in this study are modeled on vehicles in this environment through Unity's C# scripts.
B. Human-in-the-Loop Simulation
To evaluate our proposed AR-based ADAS, we conduct HITL simulation with drivers controlling an external driving simulator. As shown in Fig. 6, the driving simulator platform is built with a desktop (Intel Core i7-9750 processor @2.60 GHz, 32.0 GB memory), a Logitech G29 Driving Force racing wheel, and Unity 2019.2.11f1.

Fig. 6: Driving simulator platform used to conduct human-in-the-loop simulation

The invited participants in this simulation are advised to drive the ego vehicle in the Unity environment, traveling along 2nd Street. The ego vehicle starts from Bryant Street with zero speed, and then crosses four consecutive unsignalized intersections from south to north. All other vehicles in the simulation are non-player characters (NPCs), which run the proposed planning and control modules as CAVs.

Additionally, all participants drive the ego vehicle in a baseline scenario, where all four intersections have traditional fixed-timing traffic signals. NPC vehicles are randomly generated from all directions at each intersection, and they are not enabled with any CAV features in this scenario. This enables us to investigate the benefits brought by the proposed AR-based ADAS to the existing transportation systems.
C. Simulation Results and Evaluation
Among all the trips conducted by the participants, one sample simulation result is shown in Fig. 7. This sample result was generated when the participant drove the ego vehicle through the unsignalized intersections, under the guidance of our proposed AR-based ADAS. Specifically, the first segment of the whole trip is picked out, where the ego vehicle crossed the first intersection (i.e., the 2nd Street & Harrison Street intersection) in collaboration with all six other NPC vehicles.

The distance-time plot in Fig. 7(a) shows the ego vehicle was able to keep a relatively safe distance from its reference vehicles, including its immediate preceding NPC vehicle 3. Meanwhile, the ego vehicle also acts as a reference vehicle for NPC vehicles 4, 5, and 6, where NPC vehicle 4 considers the ego vehicle as its immediate preceding vehicle. Since the proposed feedforward/feedback control algorithm was applied to these three NPC vehicles, they consecutively decelerated during 5-12 s to maintain a relatively safe distance from their immediate preceding vehicles.

The process of all seven vehicles reserving slots is shown in Fig. 7(b), which corresponds to the vehicle trajectories in Fig. 7(a). Once a vehicle was assigned a slot by our proposed
Algorithm 1, it immediately identified its reference vehicles and applied the control algorithm. Once the vehicles crossed the current intersection, their reserved slots were reset to zero, awaiting new assignments while approaching the next intersection. It can be noticed from this plot that the ego vehicle (i.e., the dark-red dashed line) was assigned a new slot of three after it entered the next link, even before NPC vehicles 5 and 6 crossed the first intersection. This means the proposed slot reservation process is independent at each intersection, and continues running as different vehicles approach and leave the intersection.
Fig. 7: A sample simulation result of applying the AR-based ADAS at the 2nd St & Harrison St intersection, where the ego vehicle is driven by a human driver on the driving simulation platform

A sample comparison between the unsignalized intersection scenario and the baseline traditional intersection scenario is shown in Fig. 8. This speed-distance plot was created when the same participant drove the ego vehicle through all four intersections in Fig. 5. In the baseline scenario, the ego vehicle ran into red lights at the first and fourth intersections, while directly passing the second and third intersections during green lights. In the unsignalized intersection scenario, however, the ego vehicle maintained a relatively stable speed while traveling through all four intersections, without any full stop at any intersection. Although a higher maximum speed was reached in the baseline scenario, the excessive speed changes significantly increased the travel time and the energy consumption. Based on all trips conducted by the participants in the HITL simulation, an average 20% reduction in travel time, and an average 23.7% reduction in fuel consumption (calculated by the open-source MOVESTAR model [31], assuming all vehicles are gasoline vehicles) can be achieved by applying the proposed AR-based ADAS.

Fig. 8: A sample comparison when the ego vehicle travels through the whole corridor 1) with traditional intersections and 2) with unsignalized intersections using the proposed ADAS

V. CONCLUSION AND FUTURE WORK
In this study, an AR-based ADAS was designed for connected vehicles, which visualizes the guidance information to vehicle drivers in a more intuitive manner. Instead of making the HMI a simplified output of whatever is provided by the other modules of the ADAS, we designed the planning and control modules of our ADAS to better serve the HMI, making the system human-centered. A slot reservation methodology was proposed for the unsignalized intersection use case, where a driver can cooperate with other crossing vehicles at intersections by simply following the guidance on the AR HMI. Modeling and evaluation of this AR-based ADAS were conducted in the Unity game engine, where HITL simulation results proved its benefits in travel time and energy consumption.

To take this study one step further, vulnerable road users such as pedestrians and bicyclists need to be considered in the modeling and visualization process. The term "mixed traffic environment" does not only refer to an environment mixed with different kinds of vehicles, but also includes vulnerable road users, who have the highest priority in the environment. How to build an ADAS for connected vehicles that can cooperate with vehicles, bicycles, and pedestrians at the same time remains an interesting question to be solved.

ACKNOWLEDGMENT
The contents of this paper only reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official views of Toyota Motor North America.

REFERENCES

[1] Z. Wang, Y. Bian, S. E. Shladover, G. Wu, S. E. Li, and M. J. Barth, "A survey on cooperative longitudinal motion control of multiple connected and automated vehicles," IEEE Intelligent Transportation Systems Magazine, vol. 12, no. 1, pp. 4–24, 2020.
[2] C. Olaverri-Monreal and T. Jizba, "Human factors in the design of human–machine interaction: An overview emphasizing v2x communication," IEEE Transactions on Intelligent Vehicles, vol. 1, no. 4, pp. 302–313, 2016.
[3] Z. Wang, Y. Hsu, A. Vu, F. Caballero, P. Hao, G. Wu, K. Boriboonsomsin, M. J. Barth, A. Kailas, P. Amar, E. Garmon, and S. Tanugula, "Early findings from field trials of heavy-duty truck connected eco-driving system," in , 2019, pp. 3037–3042.
[4] J. Campbell, J. Brown, J. Graving, C. Richard, M. Lichty, T. Sanquist, and J. Morgan, "Human factors design guidance for driver-vehicle interfaces," U.S. Department of Transportation National Highway Traffic Safety Administration, Tech. Rep., 2016.
[5] J. B. Kenney, "Dedicated short-range communications (DSRC) standards in the United States," Proceedings of the IEEE, vol. 99, no. 7, pp. 1162–1182, July 2011.
[6] S. Chen, J. Hu, Y. Shi, Y. Peng, J. Fang, R. Zhao, and L. Zhao, "Vehicle-to-everything (V2X) services supported by LTE-based systems and 5G," IEEE Communications Standards Magazine, vol. 1, no. 2, pp. 70–76, 2017.
[7] O. D. Altan, G. Wu, M. J. Barth, K. Boriboonsomsin, and J. A. Stark, "GlidePath: Eco-friendly automated approach and departure at signalized intersections," IEEE Transactions on Intelligent Vehicles, vol. 2, no. 4, pp. 266–277, Dec. 2017.
[8] H. Conceição, M. Ferreira, and P. Steenkiste, "Virtual traffic lights in partial deployment scenarios," in , 2013, pp. 988–993.
[9] C. Olaverri-Monreal, P. Gomes, R. Fernandes, F. Vieira, and M. Ferreira, "The see-through system: A VANET-enabled assistant for overtaking maneuvers," in , 2010, pp. 123–128.
[10] R. L. Bertini, S. Boice, and K. Bogenberger, "Dynamics of variable speed limit system surrounding bottleneck on German autobahn,"
Transportation Research Record , vol. 1978, no. 1, pp. 149–159, 2006.[Online]. Available: https://doi.org/10.1177/0361198106197800119[11] S. Nyg˚ardhs and G. Helmers, “VMS - Variable Message Signs: Aliterature review,” Infrastructure maintenance, Tech. Rep. 570A, 2007.[12] M. Quinlan, T.-C. Au, J. Zhu, N. Stiurca, and P. Stone, “Bringing simu-lation to life: A mixed reality autonomous intersection,” in
Proceedingsof IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , October 2010.[13] Y. Feng, C. Yu, S. Xu, H. X. Liu, and H. Peng, “An augmentedreality environment for connected and automated vehicle testing andevaluation*,” in , 2018,pp. 1549–1554.[14] Z. Wang, X. Liao, C. Wang, D. Oswald, G. Wu, K. Boriboonsomsin,M. Barth, K. Han, B. Kim, and P. Tiwari, “Driver behavior modelingusing game engine and real vehicle: A learning-based approach,”
IEEETransactions on Intelligent Vehicles
Journal of artificial intelligence research , vol. 31,pp. 591–656, 2008.[19] N. Neuendorf and T. Bruns, “The vehicle platoon controller in thedecentralised, autonomous intersection management of vehicles,” in
Pro-ceedings of the IEEE International Conference on Mechatronics, 2004.ICM ’04. , Jun. 2004, pp. 375–380.[20] Q. Jin, G. Wu, K. Boriboonsomsin, and M. Barth, “Platoon-based multi-agent intersection management for connected vehicle,” in , Oct. 2013, pp. 1462–1467.[21] B. Xu, S. E. Li, Y. Bian, S. Li, X. J. Ban, J. Wang, and K. Li, “Distributedconflict-free cooperation for multiple connected vehicles at unsignalizedintersections,”
Transportation Research Part C: Emerging Technologies ,vol. 93, pp. 322–334, 2018. [22] Z. Wang, G. Wu, and M. Barth, “Distributed consensus-basedcooperative highway on-ramp merging using V2X communications,”in
SAE Technical Paper , Apr. 2018. [Online]. Available: https://doi.org/10.4271/2018-01-1177[23] Z. Wang, X. Liao, X. Zhao, K. Han, P. Tiwari, M. J. Barth, and G. Wu, “Adigital twin paradigm: Vehicle-to-Cloud based advanced driver assistancesystems,” in , May2020, pp. 1–6.[24] Z. Wang, K. Han, B. Kim, G. Wu, and M. J. Barth, “Lookup table-based consensus algorithm for real-time longitudinal motion control ofconnected and automated vehicles,” arXiv:1902.07747v2 , 2019.[25] J. Ma, C. Schwarz, Z. Wang, M. Elli, G. Ros, and Y. Feng, “Newsimulation tools for training and testing automated vehicles,” in
RoadVehicle Automation 7 , G. Meyer and S. Beiker, Eds. Cham: SpringerInternational Publishing, 2020, pp. 111–119.[26] Z. Wang, G. Wu, K. Boriboonsomsin, M. Barth et al. , “Cooperativeramp merging system: Agent-based modeling and simulation using gameengine,”
SAE International Journal of Connected and Automated Vehicles ,vol. 2, no. 2, 2019.[27] Y. Liu, Z. Wang, K. Han, Z. Shou, P. Tiwari, and J. H. L. Hansen,“Sensor fusion of camera and cloud digital twin information for intelligentvehicles,” in
IEEE Intelligent Vehicles Symposium (IV) , Jun. 2020.[28] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA:An open urban driving simulator,” arXiv preprint arXiv:1711.03938 ,2017.[29] G. Rong, B. H. Shin, H. Tabatabaee, Q. Lu, S. Lemke, M. Moˇzeiko,E. Boise, G. Uhm, M. Gerow, S. Mehta et al. , “Lgsvl simulator:A high fidelity simulator for autonomous driving,” arXiv preprintarXiv:2005.03778arXiv preprintarXiv:2005.03778