Sustainability, not over-testing
Given recent turmoil in the energy market, companies that rely on simulation have a surprise waiting for them on their balance sheets. Computation is getting expensive. Uncomfortably expensive. Engineering excellence is needed, once again, to keep that cost under control.
The truth is that simulation always carries a cost. One hour of full-scale simulation typically generates approximately 1 kg of CO2. Delivering a complete Safety Case for self-driving cars, with the volume of evidence they will need, will therefore produce many thousands of tonnes of emissions. Price pressure and sustainability both demand smart thinking: how can we minimise the computational burden while maximising the safety impact?
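To put that figure in perspective, here is a quick back-of-envelope conversion in Python, using the roughly 1 kg of CO2 per full-scale simulation hour quoted above; the 1,000-tonne budget is purely illustrative:

```python
# Back-of-envelope: how many simulation hours correspond to a given CO2 budget,
# using the ~1 kg CO2 per full-scale simulation hour figure quoted above.
CO2_PER_SIM_HOUR_KG = 1.0          # rough figure from the text
emissions_tonnes = 1_000           # illustrative emissions budget, not a real Safety Case estimate

sim_hours = emissions_tonnes * 1_000 / CO2_PER_SIM_HOUR_KG
print(f"{emissions_tonnes} tonnes of CO2 ~ {sim_hours:,.0f} full-scale simulation hours")
# -> 1000 tonnes of CO2 ~ 1,000,000 full-scale simulation hours
```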
In a complex system such as a self-driving car, every additional component multiplies the relational complexity between the components that are already there. This is shown in Figure 1: there are only a handful of nodes, each representing a subsystem of an AV, but far more ‘handshakes’ between them, and the number of handshakes grows far faster than the number of nodes.
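The growth of those ‘handshakes’ is easy to quantify: for n subsystems there are n(n-1)/2 possible pairwise interactions. A minimal sketch, not tied to any particular AV architecture:

```python
# Count pairwise 'handshakes' (potential interactions) between n subsystems.
# Nodes grow linearly; pairwise interactions grow quadratically.
def handshakes(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:>3} subsystems -> {handshakes(n):>4} pairwise interactions")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```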
So when we try to find the faults of a self-driving car, we are dealing with interacting factors that can bring it down in unexpected ways. Perception errors cascade into traffic movement prediction, and prediction errors feed back into perception. Conventional series testing to detect all these faults might look something like this: sweep each input parameter systematically until there is fairly comprehensive coverage of everything that is possible. This is the trap of over-testing. The costs stack up, and so does the environmental impact. It is not scalable for the whole industry, and it is not necessary. With thoughtful testing architectures and the right mathematical tools, tests can adapt to find the real patterns at play.
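The sketch below makes the over-testing trap concrete by counting the tests an exhaustive parameter sweep would need; the parameter counts, grid resolution and per-test cost are illustrative assumptions, not AVSandbox figures:

```python
# Exhaustive grid testing: test count = resolution ** number_of_parameters.
resolution = 10              # assumed samples per parameter
cost_per_test_hours = 1.0    # assumed one simulation hour per test

for n_params in (2, 4, 6, 8):
    tests = resolution ** n_params
    print(f"{n_params} parameters -> {tests:>12,} tests "
          f"(~{tests * cost_per_test_hours:,.0f} simulation hours)")
# 2 -> 100 tests, 4 -> 10,000, 6 -> 1,000,000, 8 -> 100,000,000
```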
Imagine a space of two parameters, as in Figure 2. We can expect a boundary in either parameter at which a failure occurs, and there may well be a relationship between where these boundaries lie. If we step back and exploit that relationship, we only need to run the ‘edge tests’ along it, where the pass/fail outcome changes. The other tests generate little information of their own; their only purpose is to locate the significant ones.
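As a toy illustration of the two-parameter case, the sketch below bisects along one parameter to bracket the pass/fail boundary; run_test is an invented stand-in for a real simulation run, and the speed/gap parameters are hypothetical:

```python
# Toy example: locate the pass/fail boundary along one parameter by bisection.
# 'run_test' is a stand-in for a real simulation run and uses an invented rule.
def run_test(speed: float, gap: float) -> bool:
    """Return True (pass) if the scenario is safe under this invented rule."""
    return gap > 0.4 * speed          # hypothetical failure boundary

def find_boundary(speed: float, gap_lo: float, gap_hi: float, tol: float = 0.01) -> float:
    """Bisect the gap parameter until the pass/fail boundary is bracketed within tol."""
    assert not run_test(speed, gap_lo) and run_test(speed, gap_hi)
    while gap_hi - gap_lo > tol:
        mid = 0.5 * (gap_lo + gap_hi)
        if run_test(speed, mid):
            gap_hi = mid              # mid passes: boundary lies below
        else:
            gap_lo = mid              # mid fails: boundary lies above
    return 0.5 * (gap_lo + gap_hi)

# Roughly log2(20 / 0.01) ~ 11 tests per speed, versus hundreds for a dense sweep.
for speed in (10.0, 20.0, 30.0):
    print(f"speed {speed:>4} m/s -> boundary gap ~ {find_boundary(speed, 0.0, 20.0):.2f} m")
```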
Scaled up to many interacting parameters and many unintuitive relationships, a good adaptive sampling algorithm is required, perhaps informed by machine learning. But the benefit is clear: a greatly reduced computational load, and the savings only grow as the dimensionality of the space increases.
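One possible shape for such a scheme, sketched under strong simplifying assumptions (an invented pass/fail rule and a random-forest surrogate standing in for whatever model a real toolchain would use), is an active-learning loop that keeps adding tests where the surrogate is least certain, i.e. near the pass/fail boundary:

```python
# Sketch of adaptive sampling: concentrate tests near the pass/fail boundary
# using a surrogate classifier's uncertainty. The pass/fail rule is invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_test(x: np.ndarray) -> int:
    """Stand-in for a simulation run over 4 scenario parameters (1 = pass)."""
    return int(x[1] > 0.5 * (x[0] + x[2]))   # hypothetical rule; x[3] is irrelevant noise

# Seed with a small random batch, then add tests where the surrogate is least certain.
X = rng.uniform(0.0, 1.0, size=(20, 4))
y = np.array([run_test(x) for x in X])

for _ in range(80):                                  # 80 adaptive tests vs 10**4 grid points
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    candidates = rng.uniform(0.0, 1.0, size=(500, 4))
    proba = model.predict_proba(candidates)[:, -1]   # predicted probability of passing
    pick = candidates[np.argmin(np.abs(proba - 0.5))]  # most uncertain = closest to boundary
    X = np.vstack([X, pick])
    y = np.append(y, run_test(pick))

print(f"Ran {len(X)} tests in total; {int(y.sum())} passed, {len(y) - int(y.sum())} failed.")
```

The surrogate model and the uncertainty measure are interchangeable; the point is simply that each new test is placed where it is expected to be most informative, rather than on a predetermined grid.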
This is just one of many strategies that will be needed. But the industry must acknowledge that simulation should not be applied lightly: it is easy to fall into over-testing, wasting money and time, and generating substantial emissions along the way. In other words, AV developers, simulate smart!
Written by: Dr Marcin Stryszowski – Head of AV Safety & David Dubinsky – Project Engineer
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion.