The Sandbox: the AVSandbox knowledge hub
The Sandbox knowledge hub discusses many of the crucial issues affecting the development, engineering, use and regulation of Autonomous Vehicles.
From Cybersecurity to Safety
Cybersecurity was the name of the game in 2022. It appeared as a leading topic at multiple conferences and events. But what does cybersecurity mean to you? A traditional article would now provide a definition of cybersecurity. Instead, let us take a moment to appreciate the diversity of talents in the automotive industry.
Industry-Specific Safety
Making an Autonomous Car drive requires experts from multiple branches of engineering to collaborate: mechanical, manufacturing, motor propulsion, battery electrochemistry, control electronics, sensors and, finally, AI and behavioural science. Each ‘tribe’ has different challenges, different jargon and different toolboxes.
An AI engineer, speaking of safety, will mention performance metrics, resilience, reliability and operational excellence – to them, these are inseparable. A sensor engineer, when asked about safety, may reply: “it is safe, you don’t need safety goggles to operate it”.
Operational cybersecurity itself has many sides – post-quantum encryption, webs of honeypots, and penetration testing as a constant race against oneself – and that is just the surface of a whole domain of science.
Automotive Cybersecurity
But, given the diversity of engineering ‘tribes’, an interesting divergence in jargon has developed. Ask a network cybersecurity expert about security, and robust network architecture, CAN (Controller Area Network) and penetration testing come to mind. Then ask a sensor / AI engineer the same question, and you will learn a lot about adversarial attacks. Crucially, a cyber attack requires an adversary – a bad actor whose intention is to hinder the performance or safety of our AV. An adversarial attack on sensors is therefore an intelligent, malevolent effort to inject carefully designed data – for example, precisely manipulated pixels in an image – to confuse the sensor, and the perception algorithm behind it, into misperceiving a possibly Vulnerable Road User. An example of such an attack is visualised in Fig. 1: an image classifier can be manipulated into misclassifying or failing to detect an object.
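To make this concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), one widely studied way of crafting such a pixel-level perturbation. This is a minimal illustration only, assuming a PyTorch image classifier `model` that returns logits, a batched input tensor `image` with values in [0, 1], and an integer class `label` – all placeholders for this example, not part of AVSandbox.

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases
# the classifier's loss, producing a barely perceptible perturbation that
# can flip the predicted class. `model`, `image` and `label` are assumed
# placeholders for illustration.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even a small epsilon can be enough to change the prediction while the image looks unchanged to a human observer – which is exactly what makes attacks of this kind so striking.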
Why do we bring this up? When it comes to AV safety, the main risk is that a sensor fails to identify an object, especially a Vulnerable Road User (VRU). However, across the infinite complexity of road situations, unless a pedestrian happens to be wearing a shirt with a pixel-encoded attack printed on it, an adversarial attack is not practical on public roads.
Hence, what many of our customers – AI and sensor engineers – call sensor security, we call AV Safety. Interestingly, the tools to model these phenomena are exactly the same: sensor security differs from sensor safety only in the level of intentionality involved. The AV safety questions are: what could go wrong? Is there anything that we did not consider?
Sensor Fault Injection Testing
Fault Injection Testing is an engineering process in which we examine the whole system architecture and ask what the system’s response to a malfunction would be. For an AV’s sensing and perception layers, doing this physically means proving-ground operations with destructive tests of components, which carries the risk of the AV crashing and injuring the team. Alternatively, model-based failure modes of the ADS’s sensing stack, contained within a sensor-realistic simulation environment, offer a risk-free and affordable way forward.
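The sketch below illustrates the idea of model-based fault injection on a sensor stream. The fault parameters (bias, noise, dropout) mirror the error sources discussed in this article, but the wrapper class and the `read()` interface are assumptions made for illustration, not a real AVSandbox API.

```python
# Illustrative fault-injection wrapper: wraps any sensor object exposing a
# numeric read() method and applies deliberate faults to its output.
import random

class FaultInjectingSensor:
    def __init__(self, sensor, bias=0.0, noise_std=0.0, dropout_prob=0.0):
        self.sensor = sensor            # underlying (simulated) sensor
        self.bias = bias                # constant offset fault
        self.noise_std = noise_std      # Gaussian noise fault
        self.dropout_prob = dropout_prob  # chance of a dropped reading

    def read(self):
        """Return the underlying reading with deliberate faults applied."""
        if random.random() < self.dropout_prob:
            return None                 # simulate a dropped frame/sample
        value = self.sensor.read()
        return value + self.bias + random.gauss(0.0, self.noise_std)
```

Because the faults are injected at the model level, the same scenario can be replayed with and without each fault, isolating its effect on the ADS.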
Figure 2 visualises Claytex’s capability in modelling sensors, with unrestricted, code-level control over sensor biases, noise and distortion. This level of control over the simulation permits tests that introduce deliberate errors or malfunctions. We can then measure the ADS’s response: did it perform a Minimal Risk Manoeuvre? Did it simply carry on, or even crash? Was it a catastrophic failure, or a graceful failure?
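One way to make those questions systematic is to grade each simulation run against the possible outcomes. The harness below is a hypothetical sketch: the outcome names follow the questions above, but the `SimLog` fields are illustrative assumptions, not a real simulation-log format.

```python
# Hypothetical grading harness: map a simulation log to one of the
# failure modes discussed in the text.
from dataclasses import dataclass
from enum import Enum

@dataclass
class SimLog:                    # illustrative stand-in for a real run log
    collision_detected: bool
    mrm_triggered: bool

class Outcome(Enum):
    MINIMAL_RISK_MANOEUVRE = "graceful failure: vehicle stopped safely"
    CARRIED_ON = "fault unnoticed: vehicle continued as normal"
    COLLISION = "catastrophic failure: vehicle crashed"

def grade_response(log: SimLog) -> Outcome:
    """Classify the ADS's response to an injected fault."""
    if log.collision_detected:
        return Outcome.COLLISION
    if log.mrm_triggered:
        return Outcome.MINIMAL_RISK_MANOEUVRE
    return Outcome.CARRIED_ON
```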
Making ADS Safe by Design
Whether we call it cybersecurity, safety, adversarial attacks or fault injection, delivering an environment where AVs can be tested safely is our duty to the public. The sensors in AVs play a vital role, and a precise modelling and simulation environment is the key enabler for finding graceful failure modes for components that will inevitably fail.
Written by: Marcin Stryszowski – Head of AV Safety and Jawad Samsi – Software Engineer
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion