The Sandbox: the AVSandbox knowledge hub

The Sandbox knowledge hub discusses many of the crucial issues affecting the development, engineering, use and regulation of Autonomous Vehicles.


Legal proof the AV is safe enough

According to the Law Commission Report on Self-Driving (chapter 11.6, p. 210), a senior manager of an ASDE (Authorised Self-Driving Entity) who fails to demonstrate leadership and a structural commitment to AV safety, a failure the report classifies as connivance, will soon be committing a criminal offence.

This is serious: only a documented Due Diligence Defence, prepared in advance, guarantees that we can keep our focus on excellence. In two previous articles (Measuring Safety – Why AI cannot do it: Rare Event Analysis and Measuring Safety – the ‘99% reliable’ fallacy), we outlined how mathematically challenging it is to provide confidence in the required probability estimates.

Given this seemingly difficult situation, how do you demonstrate that your self-driving car is indeed safe enough for public roads?

In short: Fail-safe System Architecture.

A system, be it a self-driving car, an aircraft or a plant, that fails more often but fails gracefully is much safer than a seemingly unsinkable super-system. It allows us to understand the failure modes, to manage challenging epistemic risks and, crucially, it prevents the user or operator from becoming complacent.

The regulations do not call for bare probabilities, but for demonstrated understanding of how those probabilities balance. Every individual subsystem can boast stunning safety metrics, yet as the subsystems are integrated, the probability of failure grows: the smallest failure of any in-series component induces a system-critical failure.
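A back-of-the-envelope calculation makes the point. The component count and the per-component reliability below are hypothetical, chosen purely for illustration:

```python
# Hypothetical illustration: how in-series integration erodes
# "stunning" per-component safety metrics.

def series_failure_probability(component_reliability: float, n_components: int) -> float:
    """An in-series system works only if every single component works."""
    return 1.0 - component_reliability ** n_components

# Each subsystem is 99.9% reliable on its own...
for n in (1, 10, 50, 100):
    p_fail = series_failure_probability(0.999, n)
    print(f"{n:>3} in-series components -> system failure probability {p_fail:.2%}")

# ...yet at 100 in-series components the integrated
# system fails roughly 9.5% of the time.
```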

With the right architecture, the picture brightens. A hierarchical structure, visualized in Figure 1, gets the priorities right: safety at the base, high-level optimization at the top. The high-level controls can fail. They can fail often! But they fail gracefully: delivering safety insights, driving the everyday safety improvements, and building a healthy attitude towards operational safety.


Figure 1: Sensor specialization & diversification as a driver for fail-safe architecture. Narrow-band sensors can be tuned to detect specific objects, such as vulnerable road users, drivable surface (tarmac), etc.
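To make the idea concrete, here is a minimal sketch of such a hierarchy. The layer names, the triggering conditions and the fallback order are illustrative assumptions, not a description of AVSandbox or any production AV stack:

```python
# Minimal sketch of a hierarchical fail-safe control stack.
# Layer names, conditions and fallback order are illustrative
# assumptions, not a real implementation.

from dataclasses import dataclass

@dataclass
class Command:
    source: str
    action: str

class LayerFailure(Exception):
    """Raised when a layer cannot produce a trustworthy command."""

def optimizing_planner(scene: dict) -> Command:
    # Top layer: comfort/efficiency optimization.
    # Allowed to fail often, e.g. on novel scenes it cannot explain.
    if scene.get("novel_object"):
        raise LayerFailure("unrecognized object; planner abstains")
    return Command("planner", "proceed along optimized trajectory")

def safety_supervisor(scene: dict) -> Command:
    # Middle layer: conservative, rule-based behavior.
    if scene.get("vru_detected"):
        return Command("supervisor", "slow and yield")
    raise LayerFailure("no applicable rule")

def minimal_risk_maneuver(scene: dict) -> Command:
    # Base layer: always available, always safe.
    return Command("mrm", "controlled stop in lane")

def decide(scene: dict) -> Command:
    # Try layers top-down; each failure degrades gracefully to the
    # next layer and is logged as a safety insight to learn from.
    for layer in (optimizing_planner, safety_supervisor, minimal_risk_maneuver):
        try:
            return layer(scene)
        except LayerFailure as reason:
            print(f"[safety log] {layer.__name__}: {reason}")
    raise RuntimeError("unreachable: the base layer never fails")

print(decide({"novel_object": True, "vru_detected": True}))
```

The base layer never optimizes and never surprises; every higher layer is allowed to abstain, and each logged abstention becomes material for the safety case.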

Fail often? How is that a good thing? In another blog post, ‘Providing the Safety Case: is it Deterministic or Stochastic?’, we spoke about the challenges of not knowing what can go wrong: the epistemic risk.
Self-driving cars will be a novelty in traffic. Figure 2 visualizes a simple example of an AI failing. We can apply our human imagination to anticipate such failures, but we will always be surprised by the way the system actually fails.

Figure 2. What can go wrong? The failure modes of safety-critical AI systems are unforeseeable. (Source: openai.com)

That is why, with the correct architecture, we can afford the space to fail gracefully and learn from failure, while protecting the public and showing, on the balance of probabilities, that we did, do, and will continue to demonstrate our commitment to public safety.

Written by: Dr Marcin Stryszowski – Head of AV Safety

Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion.

 
