SimNet: Learning Reactive Self-driving Simulations from Real-world Observations

Luca Bergamini, Yawei Ye, Oliver Scheel, Long Chen, Chih Hu, Luca Del Pero, Błażej Osiński, Hugo Grimmett and Peter Ondruska

ICRA 2021


Why simulation?

Road testing is:

  • Expensive

  • Time consuming

  • Non-reproducible

How it works

We train SimNet using behavioural cloning on the Lyft L5 dataset

At each frame, SimNet predicts the next position of each agent independently; the frame is then updated with these predictions and fed back as input for the next step
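The closed-loop rollout described above can be sketched as follows. This is a minimal illustration, not the released API: `simnet_model` and the frame layout (a dict mapping agent IDs to 2D positions) are hypothetical stand-ins for the paper's rasterized bird's-eye-view input and CNN policy.

```python
def rollout(initial_frame, simnet_model, num_steps):
    """Unroll a closed-loop simulation (illustrative sketch).

    initial_frame: dict mapping agent_id -> (x, y) position
    simnet_model:  callable (agent_id, frame) -> next (x, y) position
    """
    frame = initial_frame
    trajectory = [frame]
    for _ in range(num_steps):
        next_frame = {}
        for agent_id in frame:
            # Each agent's next position is predicted independently,
            # conditioned on the current frame (all agents' states).
            next_frame[agent_id] = simnet_model(agent_id, frame)
        # The predictions become the next frame's input, so agents
        # react to each other's simulated (not logged) behaviour.
        frame = next_frame
        trajectory.append(frame)
    return trajectory
```

Because each step consumes the previous step's predictions rather than the logged positions, errors and interactions compound over the rollout, which is what makes the simulation reactive.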


Examples of agents being controlled by SimNet. SimNet agents exhibit realistic behaviours across different scenes.



Compared to log-replay agents, SimNet agents react appropriately to the SDV's behaviour.

SimNet error decreases when more data is available for training.

Evaluating planning system

We implemented and tested an existing ML planner based on [1] using both log-replay and SimNet agents. SimNet decreases false positives and exposes false-negative errors of the planning system.
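The evaluation logic can be sketched as a tally over simulation runs. This is a hypothetical illustration of the idea, not the paper's evaluation code; the field names and the exact FP/FN criteria are assumptions: a rear collision caused by a non-reactive (log-replay) agent is counted as a false positive, while passiveness that only surfaces against reactive (SimNet) agents is a false negative.

```python
def classify_outcomes(runs):
    """Tally planner errors over simulation runs (illustrative sketch).

    Each run is a dict with hypothetical boolean fields:
      agents_reactive: True if agents were SimNet-controlled, False for log replay
      rear_collision:  an agent hit the SDV from behind
      passive:         the SDV failed to move when it should have
    """
    # Rear collision by a non-reactive agent: the planner is blamed for an
    # event a real driver would have avoided -> false positive.
    fp = sum(1 for r in runs if r["rear_collision"] and not r["agents_reactive"])
    # Passiveness revealed only when agents react: a genuine planner error
    # that log replay hides -> false negative.
    fn = sum(1 for r in runs if r["passive"] and r["agents_reactive"])
    return {"false_positives": fp, "false_negatives": fn}
```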

While results for most metrics are comparable between the two settings, there are two exceptions: rear collisions (false positives) and passiveness (false negatives)

Reducing false positives


With log replay, the car behind the SDV crashes into it


The same car keeps a safe distance when SimNet controls it

Discovering false negatives


With log replay, the SDV uses the motion of the car behind as a cue to start moving again


The car behind is now waiting for the SDV to start



title={SimNet: Learning Reactive Self-driving Simulations from Real-world Observations},

author={Bergamini, Luca and Ye, Yawei and Scheel, Oliver and Chen, Long and Hu, Chih and Del Pero, Luca and Osiński, Błażej and Grimmett, Hugo and Ondruska, Peter},

booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},