SimNet: Learning Reactive Self-driving Simulations from Real-world Observations

Luca Bergamini, Yawei Ye, Oliver Scheel, Long Chen, Chih Hu, Luca Del Pero, Błażej Osiński, Hugo Grimmett and Peter Ondruska

ICRA 2021

Video

Why simulation?

Road testing is:

  • Expensive

  • Time-consuming

  • Non-reproducible

How it works

We train SimNet with behavioural cloning on the Lyft L5 dataset.

At each frame, SimNet predicts the next position of each agent independently; the updated positions form the next frame, which is fed back as the input for the following step.
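Conceptually, the unrolling loop looks like the sketch below. This is a minimal illustration, not the released code: names such as simnet_model.predict, planner.step and rasterise are assumed interfaces standing in for the trained model, the planner driving the SDV, and the birds-eye-view rasteriser.

def unroll(simnet_model, planner, rasterise, initial_frame, num_steps):
    # Minimal sketch of SimNet's closed-loop unrolling (illustrative names, not the released API).
    frame = dict(initial_frame)   # agent_id -> state (position, yaw, ...), including the "SDV"
    history = [frame]
    for _ in range(num_steps):
        next_frame = {}
        for agent_id, state in frame.items():
            if agent_id == "SDV":
                # The SDV follows the planner under test, not SimNet
                next_frame[agent_id] = planner.step(frame)
            else:
                # Every other agent is predicted independently from its own egocentric raster
                raster = rasterise(frame, center_agent=agent_id)
                next_frame[agent_id] = simnet_model.predict(raster)
        frame = next_frame        # the updated frame becomes the input of the next step
        history.append(frame)
    return history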

Examples

Examples of agents being controlled by SimNet. SimNet agents exhibit realistic behaviours across different scenes.

Log-replay

SimNet

Compared to log-replay agents, SimNet agents react appropriately to the SDV's behaviour.

SimNet error decreases when more data is available for training.

Evaluating planning system

We implemented and tested an existing ML planner based on [1] using both log-replay and SimNet agents. SimNet reduces false positives and exposes false negatives of the planning system.

While results for most metrics are comparable, there are two notable exceptions: rear collisions (false positives) and passiveness (false negatives).
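The difference between the two evaluation modes is sketched below (function names are illustrative, not the paper's evaluation code): log-replay agents follow their recorded trajectories regardless of what the SDV does, while SimNet agents are re-predicted every frame and can therefore react to the planner's decisions.

def step_other_agents(frame, t, mode, simnet_model, rasterise, recorded_log):
    # Sketch of the two agent modes used when evaluating the planner (illustrative names).
    next_agents = {}
    for agent_id in frame:
        if agent_id == "SDV":
            continue
        if mode == "log_replay":
            # Non-reactive: the agent replays its recorded state, even if the SDV is in its path
            next_agents[agent_id] = recorded_log[t + 1][agent_id]
        else:  # mode == "simnet"
            # Reactive: the agent is re-predicted from the current frame, which includes the SDV
            raster = rasterise(frame, center_agent=agent_id)
            next_agents[agent_id] = simnet_model.predict(raster)
    return next_agents

Non-reactive replay is what produces the spurious rear collisions shown below: the following vehicle cannot slow down when the planner brakes earlier than the human driver did.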

Reducing false positives

Log-replay

The logged car behind the SDV crashes into it.

SimNet

The same car keeps a safe distance when controlled by SimNet.

Discovering false negatives

Log-replay

The SDV uses the logged car behind it as a cue to start moving again.

SimNet

The car behind now waits for the SDV to start, exposing the planner's passiveness.

Cite

@inproceedings{bergamini2021simnet,
  title={SimNet: Learning Reactive Self-driving Simulations from Real-world Observations},
  author={Bergamini, Luca and Ye, Yawei and Scheel, Oliver and Chen, Long and Hu, Chih and Del Pero, Luca and Osiński, Błażej and Grimmett, Hugo and Ondruska, Peter},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={--},
  year={2021},
  organization={IEEE}
}