A Fleet Learning Architecture for Enhanced Behavior Predictions during Challenging External Conditions
Florian Wirthmüller, Marvin Klimke, Julian Schlechtriemen, Jochen Hipp, Manfred Reichert
Abstract — Already today, driver assistance systems help to make daily traffic more comfortable and safer. However, there are still situations that are quite rare but hard to handle at the same time. In order to cope with these situations and to bridge the gap towards fully automated driving, it becomes necessary not only to collect enormous amounts of data but rather the right ones. This data can be used to develop and validate the systems through machine learning and simulation pipelines. Along this line, this paper presents a fleet learning-based architecture that enables continuous improvements of systems predicting the movement of surrounding traffic participants. Moreover, the presented architecture is applied to a testing vehicle in order to prove the fundamental feasibility of the system. Finally, it is shown that the system collects meaningful data which are helpful to improve the underlying prediction systems.
I. INTRODUCTION
Driver assistance systems are on the rise and help to prevent accidents and to support drivers in various ways more and more frequently. Thereby, modules predicting future motions of surrounding traffic participants constitute a central piece of such systems' intelligence. As shown in [1], it is beneficial to integrate external information such as knowledge about weather or traffic conditions into these prediction modules. Thus, the systems are enabled to deal with rarely occurring and nevertheless challenging conditions, resulting in increased system performance as well as benefits for the drivers in general and especially during challenging conditions. As a prerequisite for developing such context-aware motion prediction modules, huge amounts of data need to be collected. But it is not only about gathering the pure amount of data but rather about collecting the right
F. Wirthmüller, M. Klimke, J. Schlechtriemen and J. Hipp are with Mercedes-Benz AG, Böblingen, Germany, E-Mail: {first name.last name}@daimler.com
F. Wirthmüller and M. Reichert are with the Institute of Databases and Information Systems (DBIS), Ulm University, Ulm, Germany, E-Mail: {first name.last name}@uni-ulm.de
J. Schlechtriemen is with the Institute of Realtime Learning Systems at the University of Siegen, Siegen, Germany
ORCID (ordered as authors above): https://orcid.org/0000-0002-9732-2561; https://orcid.org/0000-0003-2647-9673; https://orcid.org/0000-0002-9130-061X; https://orcid.org/0000-0002-9037-9899; https://orcid.org/0000-0003-2536-4153

data. This means to collect data during conditions where current systems face problems and data which facilitate developers to improve their systems. As at least some of these conditions occur rather rarely, it is crucial to collect corresponding data with a large fleet of vehicles to ensure a good coverage of all kinds of situations. Hence, this work presents a data collection architecture enabling continuously improving motion predictions over time. Besides, we demonstrate the fundamental feasibility of the approach by integrating it into a testing vehicle. Fig. 1 illustrates the idea of such a fleet learning architecture, indicating its potential for system improvements.

The remainder of this work is structured as follows: Sec. II gives an overview of related works. Sec. III introduces the new fleet learning-based architecture concept. The concept enables the detection of challenging conditions with respect to behavior prediction. In particular, it allows re-parametrizing onboard prediction modules in order to achieve improved prediction performance. Afterwards, Sec. IV and Sec. V describe the development of the desired prediction watchdog and the prototypical realization of the needed onboard components in a testing vehicle. Sec. VI summarizes and concludes the article.

II. RELATED WORK
For the present study, works dealing with the development of systems intended for data collection and management (Sec. II-A) as well as such dealing with behavior prediction (Sec. II-B) are of particular interest. After introducing and categorizing characteristic approaches, Sec. II-C discusses them and deduces the contribution of this article.
A. Data Collection and Management Systems
Works intending to collect, manage and preprocess data to be used during the development, test and simulation of algorithms for automated driving applications in general can be divided into two main groups. The approaches of the first group focus on data collection from an external point of view. For this purpose, camera systems or statically positioned drones looking from a bird's-eye view onto the scene are used. Whereas the NGSIM data set [2] has been very popular, most researchers recently switched over to the more exact and larger highD and inD data sets [3], [4].

By contrast, the second group relies on dedicated measurement vehicles [5]–[7]. Here, the data collection is performed from a moving point of view within the scene. Thus, challenges such as occluded vehicles can occur, and the data collection mechanism itself can influence the collected behaviors. In exchange, measurement durations for single vehicles can be significantly increased compared to approaches of the first group.

In addition to the data sets mentioned above, which represent traffic scenes and their objects through numerical descriptions, several optical data sets exist. As examples, the popular KITTI [8] and Cityscapes [9] data sets or the upcoming Baidu driving [10] and Audi autonomous driving [11] data sets can be mentioned. While such optical data sets are more suitable, for example, for the development of object detection and semantic segmentation algorithms, numerical ones are significantly more useful for object motion predictions. Additionally, this work focuses on the development of a fleet learning architecture. Consequently, it is desirable to minimize the size of the data to be transmitted.
Accordingly, optical data sets are only of secondary interest for the presented article, as images or even videos are much larger compared to sparse numerical object representations. This also applies to works focusing on the collection of pedestrian motion data, such as the popular UCY, ETH, and Stanford drone data sets [12]–[14], as our research focuses on the prediction of vehicle motions.
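The size argument can be made concrete with a back-of-envelope comparison. The following sketch contrasts a sparse numerical object-list representation with a single compressed camera frame; all numbers are illustrative assumptions, not measurements from the presented system.

```python
# Rough per-frame payload comparison: numeric object list vs. camera image.
# All constants below are assumptions chosen only for illustration.

NUM_OBJECTS = 20          # assumed number of surrounding objects per scene
FLOATS_PER_OBJECT = 12    # e.g. position, velocity, acceleration, size, yaw
BYTES_PER_FLOAT = 4       # single precision

numeric_bytes = NUM_OBJECTS * FLOATS_PER_OBJECT * BYTES_PER_FLOAT

# A single compressed camera frame (e.g. 1280x960 JPEG) is typically
# on the order of 100 kB.
image_bytes = 100_000

print(f"numeric object list: {numeric_bytes} B")   # 960 B under these assumptions
print(f"image/numeric ratio: {image_bytes / numeric_bytes:.0f}x")
```

Even under conservative assumptions, the numeric representation is two orders of magnitude smaller per frame, which is what makes fleet-wide transmission feasible.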
B. Behavior Prediction Approaches
According to Lefèvre [15], behavior prediction approaches can be divided into three categories: physics-based, maneuver-based, and interaction-aware prediction approaches.
Physics-based approaches assume that future vehicle motions solely depend on the laws of physics and can be described with simple models such as constant velocity or constant acceleration. A good overview of corresponding approaches is provided in [16]. By contrast, maneuver-based approaches (e.g. [1], [17]–[22]) try to infer the maneuver a driver intends to perform. Finally, interaction-aware approaches [23]–[26] provide the most advanced motion models by predicting the motions of all vehicles in a given situation simultaneously. In particular, these models consider that all vehicles mutually influence each other.

[18] uses a categorization which is more oriented towards the representation of the prediction output and also allows categorizing approaches that cannot be uniquely assigned to one of the aforementioned classes. This categorization distinguishes between approaches for maneuver prediction, position prediction, and hybrid approaches. While maneuver prediction approaches (e.g. [20], [24]) try to infer which one of a fixed set of maneuvers a vehicle will perform, position prediction approaches (e.g. [21], [23], [25]–[28]) try to infer at which exact position a vehicle will be at a certain future point in time, i.e. the latter approaches operate in a continuous space. Finally, hybrid approaches (e.g. [1], [17]–[19], [22]) integrate the outputs of maneuver and position prediction approaches into a single or combined model.
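The simplest of the physics-based models mentioned above, the constant-velocity model, can be sketched in a few lines. The function name and sampling scheme are our own illustration, not taken from any of the cited works.

```python
import numpy as np

def predict_constant_velocity(x, y, vx, vy, horizon, dt=0.1):
    """Predict future (x, y) positions assuming constant velocity.

    Samples the predicted trajectory every `dt` seconds up to
    `horizon` seconds ahead and returns it as an (N, 2) array.
    """
    n_steps = int(round(horizon / dt))
    steps = dt * np.arange(1, n_steps + 1)     # 1*dt, 2*dt, ..., horizon
    xs = x + vx * steps
    ys = y + vy * steps
    return np.column_stack([xs, ys])

# Vehicle at the origin moving 20 m/s forward and drifting 0.5 m/s laterally:
trajectory = predict_constant_velocity(0.0, 0.0, 20.0, 0.5, horizon=3.0)
print(trajectory[-1])  # position after 3 s: approximately (60.0, 1.5)
```

A constant-acceleration variant would simply add a `0.5 * a * steps**2` term per axis; anything beyond that quickly calls for the maneuver-based or interaction-aware models discussed above.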
C. Contribution
As the literature overview has revealed, a lot of research has been spent on data collection for automated driving as well as on motion prediction. However, the presented works presume a setting where a data set is collected once and afterwards utilized to train and validate prediction models. This procedure obviously limits the variance as well as the size of the data set. In general, it is not possible to collect a data set covering all relevant corner cases through a single data collection campaign.

As an exception, [29] uses growing hidden Markov models to learn trajectory prediction models for pedestrians and vehicles at a fixed location. Essentially, the drivable space is represented as a discretized graph with edges and nodes, which is updated at runtime. Although this work is in line with our research direction, it cannot be integrated into a moving vehicle with its therefore changing surrounding.

In order to bridge the described research gap, this article contributes in three respects:

C.1 A fleet learning-based architecture enabling enhanced behavior predictions, especially during challenging external conditions, is presented.
C.2 As a key part of the described architecture, a prediction watchdog is developed.
C.3 The necessary modules of the architecture are prototypically implemented and integrated into a testing vehicle, demonstrating the fundamental feasibility of the approach.

III. ARCHITECTURE CONCEPT
We aim to develop an architecture concept enabling continuous performance improvements of motion prediction systems within a fleet of vehicles. The architecture needs to fulfill the following requirements:

R.1 The prediction performance shall be equal over all vehicles of the fleet in any situation.
R.2 Data transmission (e.g. via mobile communication), if necessary, shall be restricted to a minimum in order to reduce communication costs.
Fig. 2. Overview of the proposed fleet learning architecture. The colored connections highlight the two main loops within the architecture: blue: condition-adaptive parameter request loop; green: data collection and parameter update loop. As indicated by the 1-to-N relation, there are many vehicles with the described in-vehicle component communicating with a single backend component. (Abbreviations: SI: sensor information; PE: predicted environment; PT: planned trajectory; PP: prediction parameters; TR: trigger; TD: training data; PU: parameter update; X_T0: X buffered.)
R.3 The prediction module shall produce reliable results even if the system is offline (i.e. if there is no mobile communication signal).
R.4 The overall prediction performance shall increase over lifetime.
R.5 All updates to be deployed to the vehicle fleet need to go through a release process.

The developed architecture meeting these requirements is depicted in Fig. 2. From a high-level perspective, the architecture comprises a communication channel as well as an in-vehicle and a backend component. The in-vehicle component, in turn, comprises seven modules:

• A sensor fusion module aggregating the raw information from different sensors and providing a consistent representation of the surrounding to other modules.
• A situation prediction module providing the trajectory planning module with information about the evolution of the current traffic situation. The module's output can be optimized through remote parametrization.
• A trajectory planning module planning trajectories for the ego-vehicle based on the current sensor information as well as the situation predictions. Good trajectories are characterized by safety and comfort for the passengers. To enable the planning module to generate such trajectories, both inputs need to be as accurate as possible in any situation.
• Actuators realizing planned trajectories.
• A prediction buffer storing the current prediction output, the prediction parameters, and the current sensor information until reaching the prediction time.
• A comparison module comparing the actual positions of the surrounding vehicles with the predictions made some moments ago. If a predicted position differs too much from the actual one, this module triggers the communication module to send a new package of training data containing the buffered prediction (output), the buffered sensor information (input), and the actual position (desired output) to the backend.
This mechanism ensures that exactly those situations are detected and used to increase the prediction performance which are currently handled sub-optimally by the predictions. This contributes to meeting requirement R.2.
• A communication module requesting condition-specific prediction parameters from the backend and providing them to the prediction module. This communication module also transmits data backwards over the communication channel.

The backend part, on the other hand, consists of only four modules:

• A communication module receiving data from the vehicle fleet and transmitting prediction parameters backwards.
• A condition-adaptive parameter storage holding the parameters currently used during different situations. Due to the use of a single shared parameter storage for all vehicles of the fleet, requirement R.1 is met.
• A situation database storing all data that are necessary to (re-)train a machine learning-based prediction module. In detail:
  – All inputs necessary for the prediction module.
  – The desired output of the prediction module.
  – The external conditions of the measurement (e.g. weather or speed limit).
• A parameter update module using the measurements stored in the situation database to calculate improved parameter values. Before pushing an update to the parameter storage, it is checked whether it increases the performance in all known situations. Only then are the updated parameters released, resulting in the fulfillment of requirements R.4 and R.5.

As further shown in Fig. 2, there are essentially two data loops. The loop emphasized in green collects data that shall enable determining improved prediction parameters in the backend. Within the blue-colored loop, vehicles request condition-adaptive prediction parameters and use them to ensure reliable predictions during all situations. To ensure that the prediction also works in scenarios in which no communication with the backend is possible, the system may fall back to a basic parameter set (fulfilling requirement R.3). The latter is used initially as well. To bridge short offline phases, it is advisable that the vehicles request parameters in advance if possible. This becomes possible, for example, if the route ahead is known or if it is foreseeable that it will start to rain soon. From the viewpoint of functional safety, it might also be advantageous to rely on a fixed neural network architecture and to solely adjust the weights during parameter updates. This, though, is also transferable to other prediction techniques.
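The release gate of the parameter update module can be sketched as a simple no-regression check over the situation database. The function and the scalar error metric below are hypothetical illustrations; the concrete evaluation depends on the deployed prediction model.

```python
def release_parameter_update(candidate, current, situation_db, evaluate):
    """Release a candidate parameter set only if it performs at least as
    well as the current one in every known situation (cf. R.4 and R.5).

    `evaluate(params, situation)` returns a prediction error
    (lower is better); its concrete form is model-specific.
    """
    for situation in situation_db:
        if evaluate(candidate, situation) > evaluate(current, situation):
            return current   # reject: regression in at least one situation
    return candidate         # accept: no regression anywhere

# Hypothetical usage: parameters are scalars, the error is the distance
# to a situation-specific optimum.
situations = [0.0, 1.0, 2.0]
error = lambda params, situation: abs(params - situation)
print(release_parameter_update(1.0, 5.0, situations, error))  # releases 1.0
```

A production release process would of course add held-out validation data and a staged rollout on top of this check, but the gating idea stays the same: never deploy parameters that regress in any known situation.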
Fig. 3. Overview of the communication channel and the three transmitted message types.
Fig. 3 shows the communication channel and the in-vehicle- and backend-sided communication modules in more detail. Basically, there are three types of messages to be transmitted:

• New measurements collected by a vehicle that need to be added to the situation database.
• A request for a parameter set fitting the external conditions a vehicle is faced with.
• A reply to a parameter request.

IV. PREDICTION WATCHDOG
The prediction watchdog as depicted in Fig. 2 consists of the prediction buffer and the comparison module. As introduced briefly, it is able to memorize predicted positions as well as the input features that led to the respective model output. Moreover, it compares the actual prediction outputs with the desired ones and triggers the transmission of additional data to the backend. Sec. IV-A provides further details on the concept of buffering predictions, whereas
Fig. 4. Illustration of the memory module update using ego-relative coordinates. The striped green vehicle depicts the predicted position of the green vehicle. The vehicle shown in blue is the ego-vehicle.
Sec. IV-B outlines the working principle of the comparison module and the triggering.
A. Prediction Buffer
The prediction buffer's function is to hold positions $[\hat{x}_{t_h}, \hat{y}_{t_h}]^T$ predicted at the current point in time $t = t_0$ until reaching the prediction horizon $t_h$ at $t = t_0 + t_h$. In case of an ideal prediction for any given point in time, the memorized point lies on the continuously updated trajectory until the vehicle arrives at that position.

Due to the lack of a world-fixed coordinate frame, the predicted position has to be fixed to the current environment by updating its vehicle-relative coordinates. The prediction is memorized in lane coordinates, also called Frenet coordinates [30], which enable a robust and reasonably precise way of updating the numeric values using Euler integration. The velocity of the ego-vehicle is measured and split into longitudinal and lateral components given the current driving lane, denoted by $\vec{v} = [v_x, v_y]^T$. For each memory entry, there is a countdown variable $t_c$ that is initialized with the prediction horizon $t_h$ when saving a new prediction. The memorized prediction is updated according to Eq. 1 and Eq. 2:

$$\begin{bmatrix} \hat{x}_{t_h} \\ \hat{y}_{t_h} \end{bmatrix} \leftarrow \begin{bmatrix} \hat{x}_{t_h} \\ \hat{y}_{t_h} \end{bmatrix} - \begin{bmatrix} v_x \\ v_y \end{bmatrix} \cdot \Delta t \qquad (1)$$

$$t_c \leftarrow t_c - \Delta t \qquad (2)$$

$\Delta t$ corresponds to the time passed since the last model update. Fig. 4 illustrates the process of updating the model. Simply put, the predicted position, declared relative to the ego-vehicle performing the prediction, is adjusted with the ego-vehicle's movement in each step. As soon as $t_c$ reaches zero, the memorized prediction is not updated anymore and the comparison with the current position can be carried out. Due to the limited model update frequency, the exact moment when the countdown vanishes cannot be captured. A significantly large movement of the vehicle can occur until the assessment is issued, as the longitudinal velocity can be reasonably high. To deal with that effect, a constant-velocity correction in longitudinal direction is performed. This prevents the slack due to the finite update frequency from having an effect on the accuracy evaluation. A perfect prediction would feature a residual position of zero in the exact moment the countdown reaches zero.

Fig. 5. Image showing the prediction and logging approach integrated into a testing vehicle and connected to an AR visualization.

B. Comparison
The working principle of the comparison module is rather simple. In order to send appropriate data to enhance the prediction modules in the backend, it is advisable to select those predictions that are too far away from the desired output, i.e. the actual position $[x_{t_h}, y_{t_h}]^T$. Therefore, the comparison module calculates the longitudinal prediction error $e_{x,t_h}$ and the lateral prediction error $e_{y,t_h}$ according to Eq. 3 and Eq. 4:

$$e_{x,t_h} = | x_{t_h} - \hat{x}_{t_h} | \qquad (3)$$

$$e_{y,t_h} = | y_{t_h} - \hat{y}_{t_h} | \qquad (4)$$

A new data package is sent to the backend when one of the given thresholds $\Theta_x$ and $\Theta_y$ for the two directions is exceeded.

V. APPLICATION IN A TESTING VEHICLE
In order to demonstrate the fundamental feasibility and benefits of the presented approach, we implemented the described modules in a testing vehicle. To do so, we relied on the findings we published in [18] with regard to the prediction component as well as on the prediction watchdog presented in Sec. IV. Initially, we restrict ourselves to the prediction of the behavior of the ego-vehicle, as this setting is easier and faster to realize. In general, the same mechanisms can be transferred to surrounding vehicles as well. Our investigations focus on the fundamental feasibility of the data collection loop as well as the quality of the collected data in order to enable model improvements. The design of the communication channel and the training of adapted prediction models are out of the scope of this work. Instead, the data which would be transferred from the vehicle to the backend in the final application are simply logged in CSV files. This allows for a downstream inspection by examining, e.g., the histograms of the residuals.

The testing vehicle is equipped with a series-like sensor setup consisting of automotive radars and cameras facing the front and the back of the vehicle. Moreover, the testing vehicle is equipped with an additional computing unit where a ROS environment [31] is deployed. Within the ROS environment, the sensor signals published over the FlexRay bus are accessible, allowing for the easy implementation of new functional blocks. The ROS environment is connected to an integrated augmented reality display that allows visualizing the prediction outputs in real time. Fig. 5 shows an example of the AR visualization of the predictions. A video showing the output for a short sequence can be found on ResearchGate. The visualization solution allows for additional visual inspection of the system performance.

In our experiments, the denoted system frequency is 25 Hz and the threshold for triggering an erroneous lateral prediction logging is set to $\Theta_y = 0.$ m.
Thus, the logged data have to exhibit significantly higher prediction errors than the system shows during normal operation. According to [18], the median lateral error should be around 0.11 m for a prediction horizon of 3 s. These values cannot be reached in the given experiment, as the prediction models could be subject to transfer errors. This is because a different, even though similar, vehicle and sensor setup was used compared to the original work. However, as an approximate estimate, the values obtained are sufficient.

Fig. 6 depicts the distributions of the logged samples over the longitudinal acceleration $a_x$ (upper part) and lateral acceleration $a_y$ (lower part). This shows the samples collected during several highway test drives with an overall measuring period of more than one hour. For the sake of simplicity, we restricted ourselves to solely investigate the lateral direction here. According to the visualization, the employed position prediction approach seems to produce more faulty predictions when negative longitudinal or positive lateral accelerations occur. Even this small example with a very limited amount of collected data shows that the presented collection strategy has great potential to enhance training data sets for motion predictions with meaningful samples. Although this example does not refer to external conditions, the same effects could be observed for such when collecting more data, e.g. through a fleet of several vehicles.

Fig. 6. Histograms showing the distribution of the logged data's longitudinal and lateral acceleration.
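The downstream inspection of the logged CSV files could, for instance, bin the samples by acceleration as in Fig. 6. The following sketch assumes one logged sample per CSV row; the column names and the inner bin edges are assumptions for illustration (the figure additionally uses open-ended bins beyond ±1.2 m/s²).

```python
import csv
import numpy as np

def acceleration_histogram(csv_path, column="a_y",
                           bins=(-1.2, -1.0, -0.5, 0.0, 0.5, 1.0, 1.2)):
    """Count logged samples per acceleration bin, as visualized in Fig. 6.

    Assumes a CSV file with a header row and a numeric `column`
    (e.g. 'a_x' or 'a_y' in m/s^2). Returns (counts, bin_edges).
    """
    with open(csv_path, newline="") as f:
        values = [float(row[column]) for row in csv.DictReader(f)]
    counts, edges = np.histogram(values, bins=bins)
    return counts, edges
```

Running this per column over all test drives yields exactly the kind of per-bin sample counts shown in the figure, and makes it easy to spot regimes (e.g. strong braking or positive lateral acceleration) where the prediction triggers cluster.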
VI. SUMMARY AND OUTLOOK
We presented a new fleet learning-based data collection architecture that ensures continuous improvements of motion predictions for surrounding traffic participants. This is especially beneficial during challenging external conditions. In addition, the relevant elements of the pipeline were prototypically applied to a testing vehicle. Empirical evaluations conducted with the testing vehicle prove the fundamental feasibility of the system. Besides, the investigations show that meaningful samples, which can be used to improve the motion predictions, can be collected.

As the next steps of our research, we will expand the memory component and conduct investigations based on vehicles other than the ego-vehicle. Additionally, we plan a rollout of the data collection architecture on a larger fleet of testing vehicles. Furthermore, we will study the actual improvements of the motion prediction modules enabled by the collected data with respect to external conditions, as soon as a larger data basis becomes available.

REFERENCES
[1] F. Wirthmüller, J. Schlechtriemen, J. Hipp, and M. Reichert, "Towards incorporating contextual knowledge into the prediction of driving behavior," IEEE, 2020. Available: https://arxiv.org/abs/2006.08470.
[2] J. Colyar and J. Halkias, "US highway 101 dataset," Federal Highway Administration (FHWA), Tech. Rep. FHWA-HRT-07-030, 2007.
[3] R. Krajewski, J. Bock, L. Kloeker, and L. Eckstein, "The highD dataset: A drone dataset of naturalistic vehicle trajectories on German highways for validation of highly automated driving systems," pp. 2118–2125, IEEE, 2018.
[4] J. Bock, R. Krajewski, T. Moers, S. Runde, L. Vater, and L. Eckstein, "The inD dataset: A drone dataset of naturalistic road user trajectories at German intersections," arXiv preprint arXiv:1911.07602, 2019.
[5] V. Ramanishka, Y.-T. Chen, T. Misu, and K. Saenko, "Toward driving scene understanding: A dataset for learning driver behavior and causal reasoning," in Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7699–7707, IEEE, 2018.
[6] L. Klitzke, C. Koch, A. Haja, and F. Köster, "Real-world test drive vehicle data management system for validation of automated driving systems," pp. 171–180, INSTICC, SciTePress, 2019.
[7] M.-F. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, et al., "Argoverse: 3D tracking and forecasting with rich maps," in Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8748–8757, IEEE, 2019.
[8] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research (IJRR), vol. 32, no. 11, pp. 1231–1237, 2013.
[9] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The Cityscapes dataset for semantic urban scene understanding," in Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3213–3223, 2016.
[10] H. Yu, S. Yang, W. Gu, and S. Zhang, "Baidu driving dataset and end-to-end reactive control model," pp. 341–346, IEEE, 2017.
[11] J. Geyer, Y. Kassahun, M. Mahmudi, X. Ricou, R. Durgesh, A. S. Chung, L. Hauswald, V. H. Pham, M. Mühlegg, S. Dorn, et al., "A2D2: Audi autonomous driving dataset," arXiv preprint arXiv:2004.06320, 2020.
[12] A. Lerner, Y. Chrysanthou, and D. Lischinski, "Crowds by example," in Computer Graphics Forum, vol. 26, pp. 655–664, Wiley Online Library, 2007.
[13] S. Pellegrini, A. Ess, K. Schindler, and L. Van Gool, "You'll never walk alone: Modeling social behavior for multi-target tracking," pp. 261–268, IEEE, 2009.
[14] A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, "Learning social etiquette: Human trajectory understanding in crowded scenes," in European Conference on Computer Vision (ECCV), pp. 549–565, Springer, 2016.
[15] S. Lefèvre, D. Vasquez, and C. Laugier, "A survey on motion prediction and risk assessment for intelligent vehicles," ROBOMECH Journal, vol. 1, no. 1, p. 1, Nature Publishing Group, 2014.
[16] R. Schubert, E. Richter, and G. Wanielik, "Comparison and evaluation of advanced motion models for vehicle tracking," pp. 1–6, IEEE, 2008.
[17] J. Schlechtriemen, F. Wirthmueller, A. Wedel, G. Breuel, and K.-D. Kuhnert, "When will it change the lane? A probabilistic regression approach for rarely occurring events," pp. 1373–1379, IEEE, 2015.
[18] F. Wirthmüller, J. Schlechtriemen, J. Hipp, and M. Reichert, "Teaching vehicles to anticipate: A systematic study on probabilistic behavior prediction using large data sets," IEEE Transactions on Intelligent Transportation Systems (T-ITS), IEEE, 2020.
[19] C. Wissing, T. Nattermann, K.-H. Glander, and T. Bertram, "Trajectory prediction for safety critical maneuvers in automated highway driving," pp. 131–136, IEEE, 2018.
[20] J. Schlechtriemen, A. Wedel, J. Hillenbrand, G. Breuel, and K.-D. Kuhnert, "A lane change detection approach using feature ranking with maximized predictive power," pp. 108–114, IEEE, 2014.
[21] H. Cui, V. Radosavljevic, F.-C. Chou, T.-H. Lin, T. Nguyen, T.-K. Huang, J. Schneider, and N. Djuric, "Multimodal trajectory predictions for autonomous driving using deep convolutional networks," pp. 2090–2096, IEEE, 2019.
[22] A. Benterki, M. Boukhnifer, V. Judalet, and C. Maaoui, "Artificial intelligence for vehicle behavior anticipation: Hybrid approach based on maneuver classification and trajectory prediction," IEEE Access, vol. 8, pp. 56992–57002, IEEE, 2020.
[23] D. Lenz, F. Diehl, M. T. Le, and A. Knoll, "Deep neural networks for markovian interactive scene prediction in highway scenarios," pp. 685–692, IEEE, 2017.
[24] M. Bahram, C. Hubmann, A. Lawitzky, M. Aeberhard, and D. Wollherr, "A combined model- and learning-based framework for interaction-aware maneuver prediction," IEEE Transactions on Intelligent Transportation Systems (T-ITS), vol. 17, no. 6, pp. 1538–1550, IEEE, 2016.
[25] T. Zhao, Y. Xu, M. Monfort, W. Choi, C. Baker, Y. Zhao, Y. Wang, and Y. N. Wu, "Multi-agent tensor fusion for contextual trajectory prediction," in Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12126–12134, IEEE, 2019.
[26] M. Khakzar, A. Rakotonirainy, A. Bond, and S. G. Dehkordi, "A dual learning model for vehicle trajectory prediction," IEEE Access, vol. 8, pp. 21897–21908, IEEE, 2020.
[27] L. Fang, Q. Jiang, J. Shi, and B. Zhou, "TPNet: Trajectory proposal network for motion prediction," arXiv preprint arXiv:2004.12255, 2020.
[28] F. Altché and A. De La Fortelle, "An LSTM network for highway trajectory prediction," pp. 353–359, IEEE, 2017.
[29] D. Vasquez, T. Fraichard, and C. Laugier, "Growing hidden Markov models: An incremental tool for learning and predicting human and vehicle motion," The International Journal of Robotics Research (IJRR), vol. 28, no. 11–12, pp. 1486–1506, 2009.
[30] A. Thorvaldsson and V. Bandi, Reference path estimation for lateral vehicle control, Master thesis, Chalmers University of Technology, 2015.
[31] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: an open-source robot operating system," in ICRA Workshop on Open Source Software, 2009.