What is Safety? What is Risk? The Basics

Figure 1: Virtual testing of an autonomous vehicle in rFpro

Whether flying, driving, using a power socket or even dining in a restaurant, we take safety for granted because we trust the safety processes behind it. Food freshness is regulated and inspected, electrical systems are equipped with circuit breakers, cars undergo crash tests, and aviation is the safest of them all.

The Colours of Safety

What does it really mean to say “Car A is two times safer than Car B”? When developing a methodology for estimating safety, we naturally must begin with the definition of safety. Some say “it is safe” is synonymous with “the risk is at an acceptable level”, but what counts as acceptable depends, in turn, on how safety is defined. Every industry establishes its own safety standards, and those standards begin with the basic definitions: exactly what the autonomous vehicle industry is working through right now.

For aviation, any deviation from nominal operation is considered unsafe. In the nuclear industry, safety is defined by a threshold radiation dose to the public, while in a restaurant it is about avoiding food poisoning. But what does it mean for Autonomous Vehicles?

Each idea of ‘Safety’ is different, and distinct from the ‘Perception of Safety’. Having the exact wording of these definitions set out in law would allow AV Safety Engineers to do a much better job, but industry-wide consensus is even more critical. In the case of autonomous vehicles, we are only at the beginning of this path, waiting for best practices to emerge that standards can be built around. This is why, today, learning from other industries is more important than ever.

Measuring Safety

Once an industry-wide concept of safety is agreed upon, we can think about methods of measuring it. This is where Risk comes in handy. While safety can be vague, encompassing multiple aspects of driving, Risk is always understood as the severity of a safety-critical event multiplied by the probability of it happening. In aviation, any accident easily becomes a national disaster, but the chance of occurrence is extremely low, rendering it the safest mode of transport.
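
As a minimal illustration of that definition (the severity values and per-mile event probabilities below are hypothetical, chosen only to show how a claim like “Car A is two times safer than Car B” could be expressed as a ratio of risks):

def risk(severity: float, probability_per_mile: float) -> float:
    """Expected harm per mile: severity of the event times its likelihood."""
    return severity * probability_per_mile

# Hypothetical figures, for illustration only.
risk_a = risk(severity=1.0, probability_per_mile=2e-8)  # "Car A"
risk_b = risk(severity=1.0, probability_per_mile=4e-8)  # "Car B"

print(f"Car A risk: {risk_a:.1e} per mile")
print(f"Car B risk: {risk_b:.1e} per mile")
print(f"Car A is {risk_b / risk_a:.1f}x safer than Car B")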

Road traffic authorities record the level of driving risk as yearly casualties per 100k miles. This metric defines both the level of severity (a fatal accident) and the likelihood of it occurring per driving distance. Then there are secondary metrics, such as the number of recorded collisions and minor injuries, as well as insurance costs incurred, which help deepen the understanding of the risk profile.

In the autonomous vehicle development sector specifically, the focus of safety assessment is on the AI-enabled perception and control of the vehicle, not on mechanical resilience. Without damage modelling, the consequences of a collision cannot be computed. Since there is little industry guidance, accurate modelling of risk, and thus system-level analysis, is challenging.

Moments of uncertainty like this one (how to measure severity without expensive crash tests) are where safety guidance documentation is needed, and we will discuss them in future posts. For now, at Claytex, we use a Normalised Severity Factor D, which takes into account kinetic energy and collision incidence angle, as shown in Fig. 2. We acknowledge the simplicity of this approach, but its mathematical properties make it a convenient model for prototyping while we wait for industry standards to emerge.

Figure 2: Normalised Severity Factor: a placeholder for comparing the level of damage between varying collision characteristics, while we await industry-wide risk modelling standards.
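
The formula behind D is not published here, so the sketch below is purely illustrative: the reference energy and the cosine weighting of the incidence angle are assumptions made for this example, not Claytex’s actual definition.

import math

def normalised_severity(mass_kg: float, rel_speed_ms: float, incidence_angle_rad: float,
                        reference_energy_j: float = 0.5 * 1500 * 30**2) -> float:
    """Illustrative severity factor in [0, 1], relative to a reference head-on impact."""
    kinetic_energy = 0.5 * mass_kg * rel_speed_ms**2
    # A head-on impact (angle = 0) counts fully; a glancing impact counts less.
    angle_weight = abs(math.cos(incidence_angle_rad))
    return min(1.0, (kinetic_energy / reference_energy_j) * angle_weight)

# Example: 1500 kg vehicle, 15 m/s closing speed, 30-degree incidence angle.
print(normalised_severity(1500, 15, math.radians(30)))  # ~0.22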

Drawing from nuclear safety culture, where every design choice must be inherently safe, we have just outlined the single weakest link of our Safety Estimation Framework. Every other design decision follows a similar safety consideration and is thoroughly documented, demonstrating complete transparency and explainability for every parameter and result.

Autonomous Driving Safety Standards

We have presented the basic concepts and how Claytex builds upon them to deliver safety as a service. We have noted the need for regulation, standardisation and guidelines, but so far have neglected the AV safety standards that already exist. There are a number of standards, such as ISO 26262, which provides a design architecture framework, or BSI PAS 1883, which provides a definition of the testing domain and discusses the preparation of safety cases, but never defines what safety is. There is also UL 4600, focused on qualitative safety design guidance in a #DidYouThinkOfThat? manner, assuming that if the design is ‘safe’, then failure will not happen. Most relevant is ISO 22737, which offers a form of driver’s test for AVs that certifies a certain level of safety. The tests it specifies involve only two agents (including the ego vehicle) and are deemed passed after 5 successful iterations.

So what is the problem?

Firstly, the difference between system safety and system reliability; secondly, the challenges associated with rare event estimation.

System reliability theory assumes that EVERYTHING will break eventually. Materials degrade through cyclic use, vibration, UV light, corrosion, dust, thermal cycling and accidental scratches. A system that was deemed safe when it left the dealership will, sooner or later, fail. The job of the safety engineer is then not to say “it is safe”, but to estimate when it will eventually fail.
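
A minimal sketch of that reliability view, assuming a constant failure rate (an exponential model) purely for illustration; real components are usually modelled with more detailed distributions such as Weibull:

import math

FAILURE_RATE_PER_HOUR = 1e-5  # hypothetical constant failure rate

def probability_failed_by(hours: float, rate: float = FAILURE_RATE_PER_HOUR) -> float:
    """Probability that the component has failed at least once within `hours` of operation."""
    return 1.0 - math.exp(-rate * hours)

for years in (1, 5, 10):
    hours = years * 8760
    print(f"{years:>2} years: P(failure so far) = {probability_failed_by(hours):.0%}")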

Rare event estimation, in turn, is very difficult. Let us return to ISO 22737, where a test is passed after 5 successes. Imagine that a failure happens, pessimistically, once every 100 uses, i.e. with a probability of 1%. Repeating the test 5 times yields only a 4.9% chance of detecting the failure. To reach a confidence level of 50% (meaning the test series is 50% likely to detect an unsafe system), the test needs to be repeated 69 times; a confidence level of 90% requires 230 repetitions. And this example still assumes a simple, 2-agent scenario. What about unlikely but plausible scenarios, or common-cause malfunctions? Modelling those is complex in its own right, due to epistemic uncertainty, and will be discussed in the next article.
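
The arithmetic above can be reproduced in a few lines. The 1% per-test failure probability is the same pessimistic assumption as in the example, and the calculation assumes independent, identically distributed test outcomes:

import math

def detection_probability(p: float, n: int) -> float:
    """Chance that n independent tests reveal at least one failure, given per-test failure probability p."""
    return 1.0 - (1.0 - p) ** n

def trials_needed(p: float, confidence: float) -> int:
    """Smallest n such that the test series detects an unsafe system with the given confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

p = 0.01  # pessimistic: one failure per 100 uses
print(f"5 tests detect the failure with probability {detection_probability(p, 5):.1%}")  # 4.9%
print(f"50% confidence requires {trials_needed(p, 0.50)} tests")                         # 69
print(f"90% confidence requires {trials_needed(p, 0.90)} tests")                         # 230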

Towards Safety-as-a-Service

Delivering safety is not straightforward, and guidelines and standards are key to delivering it reliably and consistently. While we actively participate in standards development, our Safety Estimation Framework is being developed in parallel, so as soon as new regulation emerges we are ready to implement it, ensuring our methodology remains robust, traceable and explainable.

Written by: Dr Marcin Stryszowski – Lead Engineer

Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion.

