A Survey on Simulators for Testing Self-Driving Cars
Prabhjot Kaur, Samira Taghavi, Zhaofeng Tian, and Weisong Shi
Department of Computer Science, Wayne State University, Detroit, USA
Abstract
Rigorous and comprehensive testing plays a key role in training self-driving cars to handle the variety of situations that they are expected to see on public roads. Physical testing on public roads is unsafe, costly, and not always reproducible. This is where testing in simulation helps fill the gap; however, simulation testing is only as good as the simulator used and how representative the simulated scenarios are of the real environment. In this paper, we identify key requirements that a good simulator must have. Further, we provide a comparison of commonly used simulators. Our analysis shows that CARLA and LGSVL are the current state-of-the-art simulators for end-to-end testing of self-driving cars, for the reasons detailed in this paper. Finally, we also present current challenges that simulation testing continues to face as we march towards building fully autonomous cars.
According to the annual Autonomous Mileage Report (Disengagement Reports) published by the California Department of Motor Vehicles, Waymo has logged billions of miles in testing so far. As of 2019, the company's self-driving cars have driven 20 million miles on public roads in 25 cities and an additional 15 billion miles through computer simulations [43]. The number of miles driven is important; however, it is the sophistication and diversity of the miles accumulated that determines and shapes the maturity of the product [19]. Real-world testing through physical driving is not replaceable, but it requires costly infrastructure and in some cases jeopardizes the safety of the public [38], so simulation plays a key role in supplementing and accelerating real-world testing [43]. It allows one to test scenarios that are otherwise highly regulated on public roads because of safety concerns [45]. It is reproducible, scalable, and cuts development cost.
There are many simulators available for testing the software for self-driving cars, each with its own pros and cons. Some of them include CarCraft and SurfelGAN used by Waymo [42], [13], Webviz and The Matrix used by Cruise, and DataViz used by Uber [4]. Most of these are proprietary tools; however, there are many open source simulators available as well. In this paper, we compare the MATLAB/Simulink, CarSim, PreScan, Gazebo, CARLA, and LGSVL simulators with the objective of studying their performance in testing new functionalities such as perception, localization, vehicle control, and creation of dynamic 3D virtual environments.

Our contribution is two-fold. We first identify a set of requirements that an ideal simulator for testing self-driving cars must have. Secondly, we compare the simulators mentioned above and make the following observations. An ideal simulator is one that is as close to reality as possible. However, this means it must be highly detailed in terms of the 3D virtual environment and very precise with lower-level vehicle calculations such as the physics of the car. So, we must find a trade-off between the realism of the 3D scene and the simplification of the vehicle dynamics [15]. CARLA and LGSVL meet this trade-off, making them the state-of-the-art simulators. Further, Gazebo is also a popular robotic 3D simulator, but it is not very efficient in terms of the time involved in creating a 3D scene in the simulation environment. Simulators such as MATLAB/Simulink still play a key role because they offer detailed analysis of the results with their plotting tools. Similarly, CarSim is highly specialized at vehicle dynamics simulations as it is backed by more precise car models. The detailed reasoning behind these observations is described in the paper below.

This paper is organized into several sections. Section 2 provides a summary of the evolution of automotive simulators, followed by Section 3, which identifies and describes requirements for an automotive simulator used for testing self-driving cars. Section 4 then provides a survey of several open source simulators, followed by a comparison of these simulators in Section 5. Section 6 discusses various challenges that the simulators currently face. Finally, this paper concludes in Section 8.

The complexity of automotive software and hardware is continuing to grow as we progress towards building self-driving cars. In addition to traditional testing such as proper vehicle dynamics, crash-worthiness, reliability, and functional safety, there is a need to test self-driving related algorithms and software, such as deep learning and energy efficiency [25]. As an example, a Volvo vehicle built in 2020 has about 100 million lines of code according to their data [2]. This includes code for transmission control, cruise control, collision mitigation, connectivity, engine control, and many other basic and advanced functionalities that come with the cars bought today. Similarly, cars now have more advanced hardware, which includes a plethora of sensors that ensure vehicles are able to perceive the world around them just like humans do [18]. Therefore, the complexity of the modern-age vehicle is the result of both more advanced hardware and the software needed to process the information retrieved from the environment and to make decisions.

Finally, in order to assure that the finished product complies with the design requirements, it must pass rigorous testing, which is composed of many layers.
It ranges from lower-level testing of Integrated Circuits (ICs) to higher-level testing of vehicle behavior in general. The testing is accomplished by relying on both physical and simulation testing. As the complexity and functionality of vehicles continue to grow, so do the complexity, scale, and scope of the testing that becomes necessary. Therefore, the simulators used for automotive testing are in a continuous state of evolution.

These simulators have evolved from merely simulating vehicle dynamics to also simulating more complex functionalities. Table 1 shows various levels of automation per the Society of Automotive Engineers (SAE) definitions [35], along with the evolving list of requirements for testing that are inherent in our path to full automation. It is important to note that Table 1 focuses on requirements that are essentially new to testing driver-assisted features and autonomous behavior [33]. This includes perception, localization and mapping, control algorithms, and path planning. Thus, the simulators intended to be used for testing self-driving cars must satisfy requirements that extend from simulating physical car models to various sensor models, path planning, and control. Section 3 dives deeper into these requirements.
The emphasis of this paper is on testing the new and highly automated functionality that is unique to self-driving cars. This section identifies a set of criteria that can serve as a metric to identify which simulators are the best fit for the task at hand.

Table 1: Testing requirements to meet SAE automation levels (SAE J3016 Levels of Driving Automation).

Level | Description | Testing requirements
Level 0 (No Automation) | Features are limited to warnings and momentary assistance. Examples: LDW, Blind Spot Warning. | Simulation of: traffic flow, multiple road terrain types, radar and camera sensors.
Level 1 (Assisted) | Features provide steering OR brake/acceleration control. Examples: Lane Centering OR ACC. | All of the above, plus simulation of: vehicle dynamics, ultrasonic sensors.
Level 2 (Partial Automation) | Features provide steering AND brake/acceleration control. Examples: Lane Centering AND ACC at the same time. | All of the above, plus simulation of: driver monitoring system, human-machine interface.
Level 3 (Conditional Automation) | Features can drive the vehicle when all of its conditions are met. Example: Traffic Jam Assist. | All of the above, plus simulation of: traffic infrastructure, dynamic objects.
Level 4 (High Automation) | Features can drive the vehicle under limited conditions; no driver intervention. Example: local driverless taxis. | All of the above, plus simulation of: different weather conditions; lidar, camera, and radar sensors; mapping and localization.
Level 5 (Full Automation) | Features can drive the vehicle in all conditions and everywhere. Example: fully autonomous vehicles everywhere. | All of the above, plus compliance with all road rules, V2X communication.
LDW = Lane Departure Warning; ACC = Adaptive Cruise Control.

The approach we take to compile the requirements for a simulator is as follows. Firstly, we focus on the requirements driven by the functional architecture of self-driving cars [14] (Requirements 1-4). Secondly, we focus on the requirements that must be met to support the infrastructure in which the simulated car is driven (Requirements 5-7). Thirdly, we define the requirements that allow the use of the simulator for secondary tasks such as data collection for further use (Requirement 8). Finally, we list generic requirements desired from any good automotive simulator (Requirement 9).
1. Perception: One of the vital components of self-driving cars is the ability to see and make sense of (perceive) the world around them. This is called perception. Vehicle perception is composed of hardware, available in the form of a wide variety of automotive-grade sensors, and software, which interprets the data collected by the sensors to make it meaningful for further decisions. The sensors that are most prevalent in research and commercial self-driving cars today include the camera, LiDAR, ultrasonic sensor, radar, Global Positioning System (GPS), and Inertial Measurement Unit (IMU) [41]. In order to test a perception system, the simulator must have realistic sensor models and/or be able to support an input data stream from real sensors for further utilization (a minimal sketch of what a simple sensor model involves is given after this requirements list). Once the data from these sensors is available within the simulation environment, researchers can then test their perception methods such as sensor fusion [21]. The simulated environment can also be used to guide sensor placement in a real vehicle for optimal perception.
2. Multi-View Geometry (Camera Calibration): Simultaneous Localization and Mapping (SLAM) is one of the components of Autonomous Driving (AD) systems; it focuses on constructing a map of an unknown environment while tracking the location of the AD system inside the updated map. In order to support SLAM applications, the simulator should provide the intrinsic and extrinsic parameters of its cameras, in other words, the camera calibration. With this information, a SLAM algorithm can run multi-view geometry, estimate the camera pose, and localize the AD system inside the global map (a minimal projection example using intrinsics and extrinsics appears after this list).
3. Path Planning: The problem of path planning revolves around planning a path for a mobile agent so that it can move around autonomously without colliding with its surroundings. The path planning problem for autonomous vehicles builds on the research that has already been done in the field of mobile robots over the last decade. The problem is sub-divided into local and global planning [16], where the global plan is typically generated based on a static map of the environment and the local plan is created incrementally based on the immediate surroundings of the mobile agent. In order to create these plans, various planning algorithms play a key role [20]. To implement intelligent path planning algorithms such as A*, D*, and RRT [16], the simulator should at least have a built-in function to build maps or have interfaces for importing maps from outside (a small grid-based A* sketch is given after this list). In addition, the simulator should have interfaces for programming customized algorithms.
4. Vehicle Control: The final step, after a collision-free path is planned, is to execute the predicted trajectory as closely as possible. This is accomplished via control inputs such as throttle, brake, and steering [14] that are monitored by closed-loop control algorithms [27]. The Proportional-Integral-Derivative (PID) control algorithm and the Model Predictive Control (MPC) algorithm are commonly seen in research and industry [24]. To implement such control algorithms, the simulator should be capable of building vehicle dynamics models and programming the algorithms in mathematical form (a minimal PID speed controller sketch follows this list).
5. 3D Virtual Environment: In order to test the various functional elements of a car mentioned in the above requirements, it is equally important to have a realistic 3D virtual environment. The perception system relies on a photo-realistic view of the scene to sense the virtual world. This 3D virtual environment must include both static objects, such as buildings and trees, and dynamic objects, such as other vehicles, pedestrians, animals, and bicyclists. Furthermore, the dynamic objects must behave realistically to reflect the true behavior of these entities in an environment. In order to achieve 3D virtual environment creation, simulators can either rely on game engines or use a High Definition (HD) map of a real environment and render it in simulation [13]. Similarly, in order to simulate dynamic objects, vehicle simulators can leverage other domains such as pedestrian models [7] to simulate realistic pedestrian movement in the scene. Furthermore, the 3D virtual environment must support the different terrains and weather conditions that are typical of a real environment. It is important to note that the level of detail in a 3D virtual environment depends on the simulation approach taken. Some companies such as Uber and Waymo do not use highly detailed simulators [13]; therefore, they do not use simulators to test perception models. However, if the goal is to test perception models in simulation, then the level of detail is very important.
6. Traffic Infrastructure: In addition to the requirements for a 3D virtual environment mentioned above, it is also important for a simulation to support various traffic aids such as traffic lights, roadway signage, etc. [23]. This is because these aids help regulate traffic for the safety of all road users. It is projected that the traffic infrastructure will evolve to support connected vehicles in the near future [26]. However, until connected vehicles become a reality, self-driving cars are expected to comply with the same traffic rules as human drivers.
7. Traffic Scenario Simulation: The ability to create various traffic scenarios is one of the main criteria that determine whether a simulator is valuable. This allows researchers not only to re-create or play back a real-world scenario but also to test various "what-if" scenarios that cannot be tested in a real environment because of safety concerns. This criterion considers not only the variety of traffic agents but also the mechanisms that the simulator provides to generate these agents. The different types of dynamic objects include humans, bicycles, motorcycles, animals, and vehicles such as buses, trucks, and ambulances. In order to generate scenes close to real-world scenes, it is important that the simulator supports a significant number of these dynamic agents. In addition, the simulator should provide a flexible API that allows users to manage different aspects of the simulation, including generating traffic agents and more complex scenario elements such as pedestrian behaviors, vehicle crashes, weather conditions, sensor types, and stop signs.
8. Training Data Generation: In order to provide training data to AI models, the simulator should provide object labels and bounding boxes for the objects appearing in the scene; each video frame output by the sensor comes with the objects encapsulated in boxes (a sketch of such a label record is given after this list).
9. Non-Functional Requirements: The qualitative analysis of open source simulators also covers aspects that help AD developers estimate the learning time and the duration required for simulating different scenarios and experiments. These aspects are listed below.
Well maintained/Stability: In order to use the simulator for different experiments and tests, the simulator should have comprehensive documentation that makes it easy to use. In case the maintenance team improves the simulator without preserving backward compatibility, the documentation should provide a precise mapping between deprecated APIs and newly added APIs.
Flexibility/Modularity: Open source simulators should follow the separation-of-concerns principle, which helps AD developers leverage and extend different scenarios in a shorter time. In addition, the simulator should provide a flexible API that enables defining customized versions of sensors, generating new environments, and adding different agents.
Portability: If the simulator is able to run on different types of operating systems, users can leverage it more easily. Most users may not have access to different operating systems at the same time, so simulator portability saves time for the users.
Scalability via a server multi-client architecture: A scalable architecture, such as a client-server architecture, enables multiple clients running on different nodes to control different agents at the same time. This is helpful specifically for simulating congestion and/or complex scenes.
Open source: It is preferred that a simulator be open source. Open source simulators enable more collaboration and collective progress, and allow learning from peers in the same domain to be incorporated.
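To make Requirement 1 more concrete, the following is a minimal, simulator-agnostic sketch of what a "sensor model" can mean in practice: ideal ranges from the simulated scene are corrupted with Gaussian noise and occasional dropouts before being handed to the perception stack. The function name, noise level, and dropout probability are illustrative assumptions, not taken from any specific simulator.

```python
import random

def simulate_lidar_return(true_range_m, noise_std_m=0.02,
                          dropout_prob=0.01, max_range_m=100.0):
    """Turn a ground-truth range into a simulated LiDAR measurement.

    Adds zero-mean Gaussian noise and occasionally drops the return,
    mimicking the simplified sensor models many simulators expose.
    """
    if true_range_m > max_range_m or random.random() < dropout_prob:
        return None  # no return for this beam
    return max(0.0, random.gauss(true_range_m, noise_std_m))

# Example: corrupt a sweep of ideal ranges coming from the simulated scene.
ideal_sweep = [5.0, 5.1, 5.3, 60.0, 120.0]
measured = [simulate_lidar_return(r) for r in ideal_sweep]
print(measured)
```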
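For Requirement 2, the sketch below illustrates why a simulator needs to expose camera intrinsics and extrinsics: with them, a 3D point in the simulated world can be projected into pixel coordinates, which is the basic operation behind multi-view geometry and SLAM evaluation. The intrinsic values are made-up placeholders for a hypothetical 1280x720 simulated camera.

```python
import numpy as np

# Hypothetical intrinsics for a 1280x720 simulated camera (fx, fy, cx, cy).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project_point(p_world, R, t, K):
    """Project a 3D world point into pixel coordinates.

    R (3x3) and t (3,) are the camera extrinsics (world -> camera);
    K is the intrinsic matrix the simulator would need to expose for
    SLAM / multi-view geometry to be testable.
    """
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:
        return None              # point is behind the camera
    u, v, w = K @ p_cam
    return u / w, v / w

# Identity pose: camera at the origin looking down +Z.
R, t = np.eye(3), np.zeros(3)
print(project_point(np.array([1.0, 0.5, 10.0]), R, t, K))
```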
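For Requirement 3, the following is a minimal grid-based A* planner of the kind mentioned above. It is a toy occupancy-grid example, assuming the simulator can export or build such a map; production planners would instead operate on imported HD maps and continuous vehicle kinematics.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = blocked).

    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cell))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```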
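For Requirement 4, this is a minimal PID longitudinal (speed) controller sketch. It assumes the simulator exposes the ego vehicle's current speed and accepts normalized throttle/brake commands; the gains and the toy vehicle model used to exercise it are arbitrary.

```python
class PIDSpeedController:
    """Minimal PID speed controller; illustrative only."""

    def __init__(self, kp=0.5, ki=0.05, kd=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target_speed, current_speed, dt):
        error = target_speed - current_speed
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Positive output drives the throttle, negative the brake; both clipped to [0, 1].
        throttle = min(max(u, 0.0), 1.0)
        brake = min(max(-u, 0.0), 1.0)
        return throttle, brake

# Track a 15 m/s set point using a crude first-order stand-in for the vehicle.
ctrl, speed, dt = PIDSpeedController(), 0.0, 0.05
for _ in range(200):
    throttle, brake = ctrl.step(15.0, speed, dt)
    speed += (3.0 * throttle - 4.0 * brake - 0.05 * speed) * dt  # toy dynamics
print(round(speed, 2))
```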
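For Requirement 8, the snippet below sketches one possible ground-truth record a simulator could emit per camera frame: a 2D bounding box with a class name and a track identifier. The schema is hypothetical and only illustrates the kind of labels needed to train and evaluate perception models.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class BoundingBoxLabel:
    """One labeled object in a simulated camera frame (pixel coordinates)."""
    object_class: str
    x_min: int
    y_min: int
    x_max: int
    y_max: int
    track_id: int

# A hypothetical per-frame ground-truth record emitted by the simulator.
frame_ground_truth = {
    "frame": 1042,
    "sensor": "front_camera",
    "labels": [asdict(BoundingBoxLabel("vehicle", 412, 220, 530, 305, track_id=7)),
               asdict(BoundingBoxLabel("pedestrian", 610, 240, 640, 330, track_id=12))],
}
print(json.dumps(frame_ground_truth, indent=2))
```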
This section provides a brief description of the simulators that were analyzed and compared.
MATLAB/Simulink publishes the Automated Driving Toolbox™, which provides various tools that facilitate the design, simulation, and testing of Advanced Driver Assistance Systems (ADAS) and automated driving systems. It allows users to test core functionalities such as perception, path planning, and vehicle control. One of its key features is that HERE HD Live Map data [17] and OpenDRIVE® road networks [31] can be imported into MATLAB and used for various design and testing purposes. Further, users can build photo-realistic 3D scenarios and model various sensors. It is also equipped with a built-in visualizer that allows users to view live sensor detections and tracks [28].

In addition to serving as a simulation and design environment, it also enables users to automate the labeling of objects through the Ground Truth Labeler app [30]. This data can be further used for training purposes or to evaluate sensor performance. MATLAB provides several examples of how to simulate various ADAS features, including Adaptive Cruise Control (ACC), Automatic Emergency Braking (AEB), and Automatic Parking Assist [29]. Last but not least, the toolbox supports Hardware-In-the-Loop (HIL) testing and C/C++ code generation, which enables faster prototyping.
CarSim is a vehicle simulator commonly used by industry and academia. The newest version of CarSim supports moving objects and sensors that benefit simulations involving ADAS and Autonomous Vehicles (AVs) [9]. In terms of traffic and target objects, in addition to the simulated vehicle, there can be up to 200 objects with independent locations and motions. These include static objects such as trees and buildings and dynamic objects such as pedestrians, vehicles, animals, and other objects of interest for ADAS scenarios. A dynamic object is defined by a location and orientation, which is important in vehicle simulation. Additionally, when sensors are combined with objects, the objects are considered targets that can be detected. The moving objects can be linked to 3D objects with their own embedded animations, such as walking pedestrians or pedaling bicyclists. If there are ADAS sensors in the simulation, each object has a shape that influences detection. The shapes may be rectangular, circular, a straight segment (with limited visibility, used for signs), or polygonal. In terms of vehicle control, there are several math models available to use, and users can control the motion using built-in options, either with CarSim commands or with external models (e.g., Simulink). The key feature of CarSim is that it has interfaces to other software such as MATLAB and LabVIEW. CarSim offers several example simulations and has detailed documentation; however, it is not an open source simulator.
PreScan provides a simulation framework for designing ADAS and autonomous driving vehicles. It enables manufacturers to test their intelligent systems by providing a variety of virtual traffic conditions and realistic environments, aided by PreScan's automatic traffic generator. Moreover, PreScan enables users to build customized sensor suites, control logic, and collision warning features. PreScan also supports Hardware-In-the-Loop (HIL) simulation, which is a common practice for evaluating Electronic Control Units (ECUs). PreScan is good at physics-based calculations of the sensor inputs. The sensor signals are fed to the ECU to evaluate various algorithms; the signals can also be output to driver-in-the-loop or camera HIL setups. It also supports recording real-time data and GPS vehicle data, which can then be replayed later. This is very helpful for situations that are otherwise not easy to simulate with synthetic data. Additionally, PreScan offers a unique feature called the Vehicle Hardware-In-the-Loop (VeHIL) laboratory. It allows users to create a combined real and virtual system where the test/ego vehicle is placed on a roller bench and other vehicles are represented by wheeled robots with a car-like appearance. The test vehicle is equipped with realistic sensors. By using this combination of ego vehicle and mobile robots, VeHIL is capable of providing detailed simulations for ADAS.
CARLA [11] is an open-source simulator that aims to democratize autonomous driving research. It is developed on top of the Unreal Engine [40] and serves as a modular and flexible tool equipped with a powerful API to support training and validation of ADAS systems. CARLA therefore tries to meet the requirements of various ADAS use cases, for instance training perception algorithms or learning driving policies. CARLA was developed from scratch on the Unreal Engine to execute the simulation, and it leverages the OpenDRIVE standard to define roads and urban settings. The CARLA API is customizable by users and provides control over the simulation. It is based on Python and C++ and is constantly growing along with the project, which is an ecosystem of projects built around the main platform by the community.

CARLA consists of a scalable client-server architecture. The simulation-related tasks are deployed on the server, including updates to the world state and its actors, sensor rendering, and computation of physics. In order to generate realistic results, the server should run with a dedicated GPU. The client side consists of client modules that control the logic of the agents appearing in the scene, including pedestrians, vehicles, bicycles, and motorcycles, and that set up the world conditions. The setup of all client modules is achieved using the CARLA API. The vehicles, buildings, and urban layouts are some of the open digital assets that CARLA provides. In addition, environmental conditions such as different weather settings and flexible specification of sensor suites are supported. In order to accelerate queries (such as finding the closest waypoint on a road), CARLA makes use of R-trees.

In recent versions, CARLA has more accurate vehicle volumes and more realistic core physics (such as wheel friction, suspension, and center of mass). This is helpful when a vehicle turns or a collision occurs. In addition, the process of adding traffic lights and stop signs to the scene has changed from manual to automatic by leveraging the information provided by the OpenDRIVE file.

CARLA also proposes a safety assurance module based on the RSS library. The responsibility of this module is to put holds on the vehicle controls based on the sensor information. In other words, RSS defines various situations based on sensor data and then determines a proper response according to safety checks. A situation describes the state of the ego vehicle with respect to an element of the environment. Leveraging the OpenDRIVE signals enables the RSS module to take different road segments into consideration, which helps check priority and safety at junctions.
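To illustrate the client-server workflow and Python API described above, the following is a minimal sketch of a CARLA client script: it connects to a locally running server, changes the weather, spawns an autopilot vehicle at a predefined spawn point, and attaches an RGB camera that writes frames to disk. Exact blueprint and attribute names can differ across CARLA releases, so treat the identifiers as indicative rather than definitive.

```python
import random
import carla  # CARLA's Python client library

# Connect to a CARLA server assumed to be running locally on the default port.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Change the weather, one of the environmental conditions CARLA exposes.
world.set_weather(carla.WeatherParameters.WetCloudySunset)

# Spawn an ego vehicle at a random predefined spawn point and enable autopilot.
blueprints = world.get_blueprint_library()
vehicle_bp = random.choice(blueprints.filter("vehicle.*"))
spawn_point = random.choice(world.get_map().get_spawn_points())
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

# Attach an RGB camera sensor and stream its frames to disk.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "800")
camera_bp.set_attribute("image_size_y", "600")
camera = world.spawn_actor(camera_bp,
                           carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```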
Gazebo is an open source, scalable, flexible, multi-robot 3D simulator [22]. It is supported on multiple operating systems, including Linux and Windows, and it supports the recreation of both indoor and outdoor 3D environments.

Gazebo relies on three main libraries: a physics, a rendering, and a communication library. Firstly, the physics library allows simulated objects to behave as realistically as possible to their real counterparts by letting the user define physical properties such as mass, friction coefficient, velocity, and inertia. Gazebo uses the Open Dynamics Engine (ODE) as its default physics engine, but it also supports others such as Bullet, Simbody, and the Dynamic Animation and Robotics Toolkit (DART). Secondly, for visualization, it uses a rendering library called the Object-Oriented Graphics Rendering Engine (OGRE), which makes it possible to visualize dynamic 3D objects and scenes. Thirdly, the communication library enables communication amongst the various elements of Gazebo. Besides these three core libraries, Gazebo offers plugin support that allows users to communicate with these libraries directly.

There are two core elements that define any 3D scene; in Gazebo terminology, these are called a world and a model. A world represents a 3D scene, which could be an indoor or outdoor environment. It is a user-defined file in the Simulation Description Format (SDF) [34], with a .world extension. The world file consists of one or many models. A model is any 3D object: a static object such as a table, house, sensor, or robot, or a dynamic object. Users are free to create objects from scratch by defining their visual, inertial, and collision properties in SDF format. Optionally, they can define plugins to control various aspects of the simulation; for example, a world plugin controls the world properties and a model plugin controls the model properties. It is important to note that Gazebo has a wide community, which makes it possible to share and reuse models already created by others. Additionally, it has well-maintained documentation and numerous tutorials. Finally, Gazebo is a standalone simulator; however, it is typically used in conjunction with ROS [36], [44]. Gazebo supports modelling of almost all kinds of robots. [8] presents a complex scenario that shows the advanced modeling capabilities of Gazebo: a Prius Hybrid car model driving in the simulated M-city.
LGSVL, from the LG Electronics America R&D Center [32], is a multi-robot AV simulator. It provides an out-of-the-box solution for testing autonomous driving algorithms and integrates with several AD platforms, which makes it easy to test and validate an entire system. The simulator is open source and is developed on top of the Unity game engine [39]. LGSVL provides different bridges for message passing between the AD stack and the simulator backbone.

The simulator has several components. The user AD stack provides the development, test, and verification platform to AV developers. The simulator supports ROS1, ROS2, and CyberRT messages, which makes it possible to connect to Autoware [5] and Baidu Apollo [3], the most popular AD stacks. In addition, multiple AD stacks can communicate simultaneously with the simulator, via the ROS and ROS2 bridges for Autoware and a customized bridge for Baidu Apollo. LGSVL leverages Unity's game engine to generate photo-realistic virtual environments based on the High Definition Render Pipeline (HDRP) technology. The simulation engine provides functions for environment simulation (traffic simulation and physical environment simulation), sensor simulation, and vehicle dynamics. The simulator provides a Python API to control different environment entities. In addition, the sensor and vehicle model component offers a customizable set of sensors through a JSON configuration file that enables the specification of intrinsic and extrinsic parameters. The simulator currently supports camera, LiDAR, IMU, GPS, and radar, and developers can additionally define customized sensors. The simulator provides various ground-truth outputs, for instance segmentation and semantic segmentation. Also, LGSVL provides a Functional Mockup Interface (FMI) in order to integrate the vehicle dynamics model platform with external third-party dynamics models. Finally, the weather conditions, time of day, traffic agents, and dynamic actors are specified based on the 3D environment. One important feature of LGSVL is exporting HD maps from 3D environments.
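As with CARLA, a short sketch of the LGSVL Python API mentioned above may help: it connects to a running simulator instance, loads one of the sample maps, places an ego vehicle at a spawn point, sets the weather and time of day, and runs the simulation. The map and vehicle asset names are assumptions that vary between simulator releases, so this is indicative usage only.

```python
import lgsvl  # LGSVL's Python API package

# Connect to a running LGSVL simulator instance (default host/port assumed).
sim = lgsvl.Simulator(address="localhost", port=8181)

# Load a map; "BorregasAve" is one of the sample scenes shipped with the simulator.
if sim.current_scene == "BorregasAve":
    sim.reset()
else:
    sim.load("BorregasAve")

# Place an ego vehicle at one of the predefined spawn points.
# The exact vehicle asset name depends on the simulator release.
state = lgsvl.AgentState()
state.transform = sim.get_spawn()[0]
ego = sim.add_agent("Lincoln2017MKZ (Apollo 5.0)", lgsvl.AgentType.EGO, state)

# Adjust environmental conditions and the time of day, then step the simulation.
sim.weather = lgsvl.WeatherState(rain=0.3, fog=0.1, wetness=0.4)
sim.set_time_of_day(19.0)
sim.run(10.0)  # simulate for ten seconds
```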
This section provides a comparison of the simulators described in Section 4, starting with MATLAB, CarSim, and PreScan. Then we compare Gazebo and CARLA, followed by a comparison of CARLA and LGSVL. Finally, we conclude with our analysis and key observations.

MATLAB/Simulink is designed for simple scenarios. It is good at computation and has efficient plot functions. Its capability of co-simulation with other software like CarSim makes it easier to build various vehicle models; it is common to see users take vehicle models from CarSim and build their upper-level control algorithms in MATLAB/Simulink as a co-simulation project. However, MATLAB/Simulink has limited ability to realistically visualize traffic scenarios, obstacles, and pedestrian models. PreScan has a strong capability to simulate aspects of the real world, such as weather conditions, that MATLAB/Simulink and CarSim cannot; it also has interfaces with MATLAB/Simulink that make modelling more efficient.

Further, Gazebo is known for its high flexibility and its seamless integration with ROS. While the high flexibility is advantageous because it gives full control over the simulation, it comes at a cost of time and effort. As opposed to the CARLA and LGSVL simulators, the creation of a simulation world in Gazebo is a manual process in which the user must create the 3D models and carefully define their physics and positions in the simulation world within an XML file. Gazebo does include various sensor models, and it allows users to create new sensor models via plugins. Next, we compare the CARLA and LGSVL simulators.

Both CARLA and LGSVL provide high-quality simulation environments that require a GPU in order to run with reasonable performance and frame rate. The user can invoke different facilities in CARLA and LGSVL through a flexible API, although the facilities differ between the two simulators. For instance, CARLA provides a built-in recorder while LGSVL does not; to record videos in LGSVL, the user can leverage the video recording feature of the Nvidia drivers. CARLA and LGSVL provide a variety of sensors; some are common between them, such as the depth camera, LiDAR, and IMU, and each simulator provides additional sensors described on its official website. Both simulators enable users to create custom sensors. New map generation follows a different process in CARLA and LGSVL. The backbone of the CARLA simulator is the Unreal Engine, which generates new maps by automatically adding stop signs based on the OpenDRIVE file. On the other hand, the backbone of the LGSVL simulator is the Unity game engine, and the user generates a new map by manually importing different components into the Unity game engine. Additionally, the software architecture of CARLA and LGSVL is quite different. LGSVL mostly connects to AD stacks (Autoware, Apollo, etc.) through different bridges, and most of the simulator's facilities publish or subscribe to data on specified topics so that the AD stacks can consume the data. On the other hand, most of the facilities in CARLA are built in, although it enables users to connect to ROS1, ROS2, and Autoware via bridges.

While all six simulators described in this paper offer their own advantages and disadvantages, we make the following key observations.
• Observation 1: LGSVL and CARLA are most suited for end-to-end testing of the unique functionalities that self-driving cars offer, such as perception, mapping, localization, and vehicle control, because of the many built-in automated features they support.
• Observation 2: Gazebo is a popular robotic simulator, but the time and effort needed to create dynamic scenes do not make it the first choice for testing end-to-end systems for self-driving cars.
• Observation 3: MATLAB/Simulink is one of the best choices for testing upper-level algorithms because of the clearly presented logic blocks in Simulink. Additionally, it has fast plot functions that make it easier to analyze results.
• Observation 4: CarSim specializes in vehicle dynamics simulations because of its complete vehicle library and the variety of vehicle parameters available to tune. However, it has limited ability to build customized upper-level algorithms in an efficient way.
• Observation 5: PreScan has a strong capability for building realistic environments and simulating different weather conditions.

In Table 2, we provide a comparison summary in which all six simulators described in this paper are further compared.
Automotive simulators have come a long way. Although simulation has now become a cornerstone of the development of self-driving cars, common standards for evaluating simulation results are lacking. For example, the Annual Mileage Report submitted to the California Department of Motor Vehicles by key players such as Waymo, Cruise, and Tesla does not include the sophistication and diversity of the miles collected through simulation [1]. It would be more beneficial to have simulation standards that could help make a more informative comparison between various research efforts.

Further, we are not aware of any simulators that are currently capable of testing the concept of connected vehicles, where vehicles communicate with each other and with the infrastructure. However, there are test beds available, such as the ones mentioned in the report [12] from the US Department of Transportation.

In addition, current simulators, for instance CARLA and LGSVL, are ongoing projects that keep integrating the most recent technologies. Therefore, the user may encounter undocumented errors or bugs, and community support is quite important, as it can improve the quality of open source simulators and ADAS tests.
There are many other simulators available that are not explicitly reviewed in this paper. For example, RoadView is a traffic scene modelling simulator built using image sequences and road Geographic Information System (GIS) data [46]. [6] provides an in-depth review of the CARLA simulator and how it can be used to test autonomous driving algorithms. Similarly, [13], [10], and [15] review various other automotive and robotic simulators. [37] discusses a distributed simulation platform for testing.
In this paper, we compare the MATLAB/Simulink, CarSim, PreScan, Gazebo, CARLA, and LGSVL simulators for testing self-driving cars. The focus is on how well they simulate and test perception, mapping and localization, path planning, and vehicle control for self-driving cars. Our analysis yields five key observations that are discussed in Section 5. We also identify key requirements that state-of-the-art simulators must have to yield reliable results. Finally, several challenges still remain with current simulation strategies, such as the lack of common standards, as mentioned in Section 6. In conclusion, simulation will continue to help design self-driving vehicles in a safe, cost-effective, and timely fashion, provided the simulations represent reality.

Table 2: Comparison of various simulators.
Requirement | MATLAB/Simulink | CarSim | PreScan | CARLA | Gazebo | LGSVL
Perception: sensor models supported | Y | Y | Y | Y (1) | Y (2) | Y (3)
Perception: support for different weather conditions | N | N | Y | Y | N | Y
Camera calibration | Y | N | Y | Y | N | N
Path planning | Y | Y | Y | Y | Y | Y
Vehicle control: support for proper vehicle dynamics | Y | Y | Y | Y | Y | Y (3)
3D virtual environment | U | Y | Y | Y, outdoor (urban) | Y, indoor and outdoor | Y, outdoor (urban)
Traffic infrastructure | Y, allows building traffic-light models | Y | Y | Y, traffic lights, intersections, stop signs, lanes | Y, allows manually building all kinds of models | Y
Traffic scenario simulation: support for different types of dynamic objects | Y | Y | Y | Y | N (2) | Y
Ground-truth object labels for training data | Y | N | N | Y | U | Y
Interfaces to other software | Y, with CarSim, PreScan, ROS | Y, with MATLAB (Simulink) | Y, with MATLAB (Simulink) | Y, with ROS, Autoware | Y, with ROS | Y, with Autoware, Apollo, ROS
Scalability via a server multi-client architecture | U | U | U | Y | Y | Y
Open source | N | N | N | Y | Y | Y
Well-maintained/stable | Y | Y | Y | Y | Y | Y
Portability | Y | Y | Y | Y, Windows, Linux | Y, Windows, Linux | Y, Windows, Linux
Flexible API | Y | Y | U | Y | Y (2) | Y

Y = supported, N = not supported, U = unknown.
(1) See Section 4.4 for details about CARLA. (2) See Section 4.5 for details about Gazebo. (3) See Section 4.6 for details about LGSVL.
References

[1] New autonomous mileage reports are out, but is the data meaningful? http://bit.ly/AMRData, 2019. Online; accessed: 13-December-2020.
[2] Vard Antinyan. Revealing the complexity of automotive software. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 1525–1528, 2020.
[3] Baidu Apollo. http://bit.ly/ApolloAuto. Online; accessed: 13-December-2020.
[4] The challenges of developing autonomous vehicles during a pandemic. http://bit.ly/ChallengesAD, 2020. Online; accessed: 01-December-2020.
[5] Autoware. Online; accessed: 01-December-2020.
[6] Rohan Bandopadhay Banerjee. Development of a simulation-based platform for autonomous vehicle algorithm validation. PhD thesis, Massachusetts Institute of Technology, 2019.
[7] Fanta Camara, Nicola Bellotto, Serhan Cosar, Florian Weber, Dimitris Nathanael, Matthias Althoff, Jingyuan Wu, Johannes Ruenz, André Dietrich, Gustav Markkula, et al. Pedestrian models for autonomous driving part II: high-level models of human behavior. IEEE Transactions on Intelligent Transportation Systems, 2020.
[8] Demo of Prius in ROS/Gazebo. https://github.com/osrf/car_demo, 2019. Online; accessed: 01-December-2020.
[9] CarSim ADAS: Moving objects and sensors. http://bit.ly/CarSimMO, 2020. Online; accessed: 13-December-2020.
[10] Qianwen Chao, Huikun Bi, Weizi Li, Tianlu Mao, Zhaoqi Wang, Ming C. Lin, and Zhigang Deng. A survey on visual traffic simulation: Models, evaluations, and applications in autonomous driving. In Computer Graphics Forum, volume 39, pages 287–308. Wiley Online Library, 2020.
[11] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. arXiv preprint arXiv:1711.03938, 2017.
[12] US DOT. Intelligent transportation systems - joint program.
[13] Joshua Fadaie. The state of modeling, simulation, and data utilization within industry: An autonomous vehicles perspective. arXiv preprint arXiv:1910.06075, 2019.
[14] Rui Fan, Jianhao Jiao, Haoyang Ye, Yang Yu, Ioannis Pitas, and Ming Liu. Key ingredients of self-driving cars. arXiv preprint arXiv:1906.02939, 2019.
[15] Miguel C. Figueiredo, Rosaldo J. F. Rossetti, Rodrigo A. M. Braga, and Luis Paulo Reis. An approach to simulate autonomous vehicles in urban traffic scenarios. Pages 1–6. IEEE, 2009.
[16] D. González, J. Pérez, V. Milanés, and F. Nashashibi. A review of motion planning techniques for automated vehicles. IEEE Transactions on Intelligent Transportation Systems, 17(4):1135–1145, 2016.
[17] HERE HD live map. http://bit.ly/HERE_HDMaps. Online; accessed: 30-December-2020.
[18] Mario Hirz and Bernhard Walzel. Sensor and object recognition technologies for self-driving cars. Computer-Aided Design and Applications, 15(4):501–508, 2018.
[19] W. Huang, Kunfeng Wang, Yisheng Lv, and FengHua Zhu. Autonomous vehicles testing methods review. Pages 163–168, 2016.
[20] Y. K. Hwang and N. Ahuja. Gross motion planning—a survey. ACM Computing Surveys, 24(3):219–291, 1992.
[21] Jelena Kocić, Nenad Jovičić, and Vujo Drndarević. Sensors and sensor fusion in autonomous vehicles. Pages 420–425. IEEE, 2018.
[22] Nathan Koenig and Andrew Howard. Design and use paradigms for Gazebo, an open-source multi-robot simulator. Volume 3, pages 2149–2154. IEEE, 2004.
[23] Sergio Lafuente-Arroyo, Pedro Gil-Jimenez, R. Maldonado-Bascon, Francisco López-Ferreras, and Saturnino Maldonado-Bascon. Traffic sign shape classification evaluation I: SVM using distance to borders. In IEEE Proceedings. Intelligent Vehicles Symposium, 2005, pages 557–562. IEEE, 2005.
[24] S. Li, K. Li, R. Rajamani, and J. Wang. Model predictive multi-objective vehicular adaptive cruise control. IEEE Transactions on Control Systems Technology, 19(3):556–566, 2011.
[25] Liangkai Liu, Sidi Lu, Ren Zhong, Baofu Wu, Yongtao Yao, Qingyang Zhang, and Weisong Shi. Computing systems for autonomous driving: State-of-the-art and challenges. IEEE Internet of Things Journal, November 2020.
[26] Yuyan Liu, Miles Tight, Quanxin Sun, and Ruiyu Kang. A systematic review: Road infrastructure requirement for connected and autonomous vehicles (CAVs). In Journal of Physics: Conference Series, volume 1187, page 042073. IOP Publishing, 2019.
[27] J. Martinez and C. Canudas-De-Wit. A safe longitudinal control for adaptive cruise control and stop-and-go scenarios. IEEE Transactions on Control Systems Technology, 15(2):246–258, 2007.
[28] Automated driving toolbox. http://bit.ly/ToolboxMATLAB, 2020. Online; accessed: 13-December-2020.
[29] Automated driving toolbox reference applications. http://bit.ly/AutomatedDrivingToolbox. Online; accessed: 30-December-2020.
[30] Automated driving toolbox ground truth labeling. http://bit.ly/GroundTruthLabeling. Online; accessed: 30-December-2020.
[31] ASAM OpenDRIVE®. http://bit.ly/ASAMOpenDrive. Online; accessed: 30-December-2020.
[32] Guodong Rong, Byung Hyun Shin, Hadi Tabatabaee, Qiang Lu, Steve Lemke, Mārtiņš Možeiko, Eric Boise, Geehoon Uhm, Mark Gerow, Shalin Mehta, et al. LGSVL simulator: A high fidelity simulator for autonomous driving. arXiv preprint arXiv:2005.03778, 2020.
[33] H. P. Schöner. The role of simulation in development and testing of autonomous vehicles. In Driving Simulation Conference, Stuttgart, 2017.
[34] SDF. http://sdformat.org/. Online; accessed: 05-December-2020.
[35] SAE Standard. J3016: Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems, 2014, USA.
[36] Kenta Takaya, Toshinori Asai, Valeri Kroumov, and Florentin Smarandache. Simulation environment for mobile robots testing using ROS and Gazebo. Pages 96–101. IEEE, 2016.
[37] Jie Tang, Shaoshan Liu, Chao Wang, and Chen Liu. Distributed simulation platform for autonomous driving. In International Conference on Internet of Vehicles, pages 190–200. Springer, 2017.
[38] Self-driving Uber car kills pedestrian in Arizona, where robots roam. http://bit.ly/UberPed, 2018. Online; accessed: 01-December-2020.
[39] Unity. Unity Technologies.
[40] Unreal. Unreal Engine Technologies.
[41] Jessica Van Brummelen, Marie O'Brien, Dominique Gruyer, and Homayoun Najjaran. Autonomous vehicle perception: The technology of today and tomorrow. Transportation Research Part C: Emerging Technologies, 89:384–406, 2018.
[42] Waymo is using AI to simulate autonomous vehicle camera data. http://bit.ly/WaymoAI, 2020. Online; accessed: 01-December-2020.
[43] Off road, but not offline: How simulation helps advance our Waymo driver. http://bit.ly/WaymoBlog, 2020. Online; accessed: 01-December-2020.
[44] Weijia Yao, Wei Dai, Junhao Xiao, Huimin Lu, and Zhiqiang Zheng. A simulation system based on ROS and Gazebo for RoboCup Middle Size League. Pages 54–59. IEEE, 2015.
[45] E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access, 8:58443–58469, 2020.
[46] Chi Zhang, Yuehu Liu, Danchen Zhao, and Yuanqi Su. RoadView: A traffic scene simulator for autonomous vehicle simulation testing. In 17th International IEEE Conference on Intelligent Transportation Systems (ITSC).