A modified version of the LGSVL Automotive Simulator with added models of micromobility vehicles such as electric scooters, skateboards, hoverboards, Segways, and one-wheels.
This project was originally forked from lgsvl/simulator and is based on their April 2019 release.
Due to size limitations of Git LFS, the entire Unity project could not be hosted on GitHub. It is instead uploaded to Dropbox and can be downloaded here.
Releases are only available for Linux. You can build your own Windows or Linux binaries using the instructions below.
- With Autoware Ego-car and many micromobility vehicles: Supports rendering of object detections directly from Autoware. Download
- With Apollo Ego-car and manually controlled e-scooter: Only contains a single scooter that can be manually controlled using WASD keys on the keyboard. Download
- With Apollo Ego-car and many micromobility vehicles: Perfect for data collection. Download
Please note that the first build can take upwards of 45 minutes; subsequent builds will be much faster. Close the Unity Editor while building, as the build cannot complete with the editor open. You can build for Linux from Windows if you installed the Linux build target when installing Unity.
```
C:\path\to\Unity\Editor\Unity.exe -batchmode -nographics -silent-crashes -quit ^
  -buildDestination C:\output\folder\simulator.exe ^
  -buildTarget Win64 -executeMethod BuildScript.Build ^
  -projectPath C:\path\to\simulator\source\code ^
  -logFile log.txt
```
```
path/to/Editor/Unity -batchmode -nographics -silent-crashes -quit \
  -buildTarget Linux64 -executeMethod BuildScript.Build \
  -buildDestination /output/folder/simulator \
  -projectPath /path/to/simulator/source/code \
  -logFile log.txt
```
We added the micromobility vehicles listed above to the original simulator using the following workflow.
- Solid modeling of the vehicle in CAD software.
- The solid model is then saved as a mesh in .STL format.
- The mesh is imported into Blender and materials are added to the model. Here the origin is translated to the model's center of gravity and the axes are rotated to match Unity's defaults (see the Blender sketch after this list). Skipping this step makes manipulating the vehicle in Unity much more painful later.
- This model is then saved in .FBX format and imported into a Unity test scene as a `GameObject`. The model is scaled to the correct dimensions, if necessary.
- A `RigidBody` component is added to the model with a realistic mass value. The humanoid used here is downloaded from the Unity Asset Store here. Initially, the humanoid is configured in a T-pose; its limbs and joints need to be moved into appropriate positions.
- A `BoxCollider` is added to the `GameObject` and its boundaries are scaled to fit the entire model. The result is then saved as a prefab and imported into the San Francisco scene. To make handling of these vehicles easier, we created separate layers for each vehicle type listed above and distributed the vehicles into these layers.
Please follow these steps if you would like to add your own vehicles.
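If you would rather script the Blender step than do it by hand, a minimal sketch is shown below. It assumes the stock STL and FBX add-ons of a 2.7x/2.8x-era Blender; the file paths are placeholders, and operator names may differ in newer Blender releases.

```python
# Hypothetical Blender script for the mesh-preparation step: import the CAD mesh,
# move the origin to the center of mass, and export an FBX with Unity-friendly axes.
import bpy

# Import the solid model exported from CAD (path is a placeholder)
bpy.ops.import_mesh.stl(filepath="/path/to/scooter.stl")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj  # on Blender 2.79, use bpy.context.scene.objects.active

# Translate the origin to the model's center of mass so the pivot behaves
# sensibly once a RigidBody is added in Unity
bpy.ops.object.origin_set(type='ORIGIN_CENTER_OF_MASS')

# Export as FBX; -Z forward / Y up matches Unity's default import orientation
bpy.ops.export_scene.fbx(
    filepath="/path/to/scooter.fbx",
    use_selection=True,
    axis_forward='-Z',
    axis_up='Y',
)
```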
The LGSVL simulator provides separate ego-cars configured with sensor suites for the Apollo and Autoware self-driving stacks respectively. Out of the box, the sensors on these ego-cars cannot "see" the new vehicles we added, so further modifications are needed for these vehicles to be perceived by the ego-cars as NPCs (non-playable characters).
- Adding Ground Truth Sensors for micromobility vehicles: In order to detect ground truths for micromobility vehicles, we added two additional sensors to the ego-car – `MMGroundTruth2D` and `MMGroundTruth3D` – for 2D and 3D ground truth bounding boxes respectively. These sensors are similar to the existing ground truth sensors, except that they only output boxes for the micromobility vehicles' layers. Bounding box colors are defined in these sensors, and toggle switches to turn them on or off were also added. They publish to the following topics (a minimal subscriber sketch is given after this list):
  - `MMGroundTruth2D`
    - Topic: `/simulator/ground_truth/mm_2d_detections`
    - Message type: `Detection2DArray`
  - `MMGroundTruth3D`
    - Topic: `/simulator/ground_truth/mm_3d_detections`
    - Message type: `Detection3DArray`
- Modifying Culling Masks: Users can selectively choose the layers that perception sensors on the car, such as cameras, LiDAR and depth sensors, can "see" by selecting them in the `CullingMask` selector. By default, the newly added layers for micromobility vehicles are not included in the `CullingMask`, so they need to be selected in the `CullingMask` selector for the sensors to render these vehicles.
- Modifications to Perception sensors: The number of channels in the LiDAR sensor was changed from the default 16 to 64. In addition, motion blur was removed from the `DriverCamera` `GameObject` as it resulted in blurry images at lower frame rates.
- Modifications to the `NeedsBridge` list: The `NeedsBridge` list contains references to all components that are only instantiated when a ROSBridge server is active and the simulator is connected as a client. This is done so that the simulator does not waste resources publishing messages to topics that no nodes are subscribing to over ROSBridge. By default, `MMGroundTruth2D`, `MMGroundTruth3D` and the depth camera are not in the `NeedsBridge` list. The script components of these sensors were added to the `NeedsBridge` list so that their messages are sent over ROSBridge.
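To quickly check that the `MMGroundTruth2D` and `MMGroundTruth3D` topics listed above are coming through once the bridge is connected, a minimal rospy listener might look like the sketch below. It assumes the `Detection2DArray`/`Detection3DArray` message definitions come from the `lgsvl_msgs` package, as in the upstream simulator; adjust the imports if your messages live elsewhere.

```python
#!/usr/bin/env python
# Minimal sketch of a listener for the micromobility ground-truth topics.
# Assumes the message types come from the lgsvl_msgs package (upstream simulator);
# adapt the imports if your build uses different message definitions.
import rospy
from lgsvl_msgs.msg import Detection2DArray, Detection3DArray


def on_2d(msg):
    # One bounding box per micromobility vehicle currently visible to the sensor
    rospy.loginfo("2D ground truth: %d boxes", len(msg.detections))


def on_3d(msg):
    rospy.loginfo("3D ground truth: %d boxes", len(msg.detections))


if __name__ == "__main__":
    rospy.init_node("mm_ground_truth_listener")
    rospy.Subscriber("/simulator/ground_truth/mm_2d_detections", Detection2DArray, on_2d)
    rospy.Subscriber("/simulator/ground_truth/mm_3d_detections", Detection3DArray, on_3d)
    rospy.spin()
```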
Currently, manual control is supported only for e-scooters. To enable controlling an e-scooter, select its `GameObject` and check its Script component. You will then be able to control it with the WASD keys.
The ROS packages developed for this project are available in this repository.

The `lgsvl_data_collector` ROS package available here was developed to collect data from this simulator. It collects main camera images, depth camera images, LiDAR point clouds, and micromobility 2D and 3D ground truth annotations. We have provided the datasets we collected below.
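For a sense of what such a collector boils down to, the sketch below saves only the main camera stream; the actual package also records depth images, point clouds and the ground truth annotations. The topic name is a placeholder, so check `rostopic list` for the ego-car you spawned.

```python
#!/usr/bin/env python
# Simplified sketch in the spirit of lgsvl_data_collector: saves the main camera
# stream to disk. The topic name is a placeholder; the real package also records
# depth images, LiDAR point clouds and the 2D/3D ground truth annotations.
import os
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

OUT_DIR = "/tmp/lgsvl_dataset/camera"
bridge = CvBridge()


def save_image(msg):
    # Convert the ROS image to OpenCV BGR and write it out, keyed by its timestamp
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imwrite(os.path.join(OUT_DIR, "%d.png" % msg.header.stamp.to_nsec()), frame)


if __name__ == "__main__":
    if not os.path.isdir(OUT_DIR):
        os.makedirs(OUT_DIR)
    rospy.init_node("mm_data_collector")
    rospy.Subscriber("/simulator/camera_node/image_raw", Image, save_image)  # placeholder topic
    rospy.spin()
```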
This repository shows how we trained the YOLOv3 object detection algorithm on the dataset we collected from the modified simulator. Please follow the instructions in the repository if you would like to perform training on your own datasets.
Hyperparameters chosen for training are provided in detail in the project presentation here.
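As background on the label format, YOLOv3 expects one text file per image, each line holding a class id and box coordinates normalized to the image size. A conversion from pixel-space boxes such as the collected 2D ground truth might look like the sketch below; corner-coordinate input is an assumption, not necessarily the exact layout of our annotations.

```python
# Sketch: convert a pixel-space bounding box into a YOLOv3 label line.
# Assumes (x_min, y_min, x_max, y_max) corner coordinates; adapt if your
# annotations store center + size instead.
def to_yolo_label(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / float(img_w)
    height = (y_max - y_min) / float(img_h)
    return "%d %.6f %.6f %.6f %.6f" % (class_id, x_center, y_center, width, height)


# Example: a scooter (class 0) bounding box in a 1920x1080 frame
print(to_yolo_label(0, 850, 400, 960, 640, 1920, 1080))
```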
With the simulator and model inference running on the same GPU, we were able to achieve ~18 FPS. We provide two methods of visualizing inference results on camera images captured in real time from the simulator. Please use the `lgsvl_mm_perception` ROS package from the above-mentioned repository to run inference.
To publish detections directly to the simulator, use the Autoware ego-car.
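As a rough illustration of the plumbing involved, the node below subscribes to a camera topic, runs a detector, and republishes the boxes as a `Detection2DArray`. The `run_yolo` stub and both topic names are placeholders, not the actual `lgsvl_mm_perception` implementation.

```python
#!/usr/bin/env python
# Rough sketch of the inference plumbing: subscribe to camera frames, run a
# detector, and republish the boxes. run_yolo and the topic names are placeholders.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from lgsvl_msgs.msg import Detection2DArray

bridge = CvBridge()
pub = None


def run_yolo(frame):
    # Placeholder for the trained YOLOv3 model; should return a list of detections
    return []


def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    out = Detection2DArray()
    out.header = msg.header
    out.detections = run_yolo(frame)
    pub.publish(out)


if __name__ == "__main__":
    rospy.init_node("mm_perception")
    pub = rospy.Publisher("/simulator/mm_2d_detections", Detection2DArray, queue_size=1)  # placeholder
    rospy.Subscriber("/simulator/camera_node/image_raw", Image, on_image)                 # placeholder
    rospy.spin()
```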
We would like to thank Martins Mozeiko and Brian Shin from the LGSVL lab for supporting us and providing technical help throughout this project.
- Deepak Talwar - (https://github.com/deepaktalwardt)
- Seung Won Lee - (https://github.com/swdev1202)