The Sandbox: the AVSandbox knowledge hub
The Sandbox knowledge hub discusses many of the crucial issues affecting the development, engineering, use and regulation of Autonomous Vehicles.
AUTOWARE: What, How & Where?
Founded in 2015 at Nagoya University in Japan, Autoware is an open-source software stack for autonomous vehicles and embedded systems. It provides a complete set of self-driving modules, including localization, detection, prediction, planning, and control, in a framework built on top of ROS for autonomous vehicle development, testing, and validation against various scenarios. There are currently three versions of Autoware. This blog post will focus on its most acclaimed version, Autoware.AI.
Autoware.AI is the original project, built on ROS 1. It was launched as an R&D platform for autonomous driving technologies, with the intention of being sufficient for SAE Level 3/4 automation. It supports many vehicle platforms and contains launch files, configuration files, sample neural networks, and sample maps, in addition to multiple implementations of algorithms for control, localization, perception, planning, and simulation. Several simulators already provide support for Autoware, including CARLA and LGSVL, and this is where our real interest in the project comes from: we are working to interface rFpro to ROS and Autoware.
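Because Autoware.AI sits on ROS 1, each of its modules runs as a ROS node and exchanges data over named topics. As a flavour of what that looks like, here is a minimal rospy sketch that listens to a LiDAR point cloud topic; /points_raw is the conventional Autoware.AI topic name, but treat it as an assumption and adjust it to your own launch configuration.

```python
#!/usr/bin/env python
# Minimal ROS 1 node sketch: subscribe to the LiDAR point cloud topic.
# /points_raw is the conventional Autoware.AI topic name; this is an
# assumption about your launch configuration, so adjust it to your setup.
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # PointCloud2 stores points as a binary blob; width * height is the count.
    rospy.loginfo("received cloud with %d points", msg.width * msg.height)

if __name__ == "__main__":
    rospy.init_node("cloud_listener")
    rospy.Subscriber("/points_raw", PointCloud2, on_cloud)
    rospy.spin()
```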
Figure: Overview of the Autoware architecture (source: https://www.cnx-software.com/2019/02/07/autoware-open-source-software-autonomous-driving/)
System Architecture:
This section describes the Autoware system architecture. It is important to highlight that Autoware is designed primarily for urban driving; highways and freeways can also be covered, but they require additional modules.
Sensing:
Autoware mainly recognizes road environments with the help of LiDAR scanners and cameras. LiDAR scanners measure the distance to objects by illuminating a target with pulsed lasers and measuring the return time of the reflected pulses. Point cloud data from LiDAR scanners can be used to generate digital 3D representations of the scanned objects. Cameras are predominantly used to recognize traffic lights and to extract additional features of the scanned objects.
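The pulsed-laser ranging described above reduces to a one-line formula: distance is the speed of light multiplied by the round-trip time of the pulse, divided by two. A quick worked example in Python:

```python
# Time-of-flight ranging: distance = speed_of_light * round_trip_time / 2.
# The division by two accounts for the pulse travelling out and back.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s):
    return C * round_trip_s / 2.0

# A pulse that returns after 400 nanoseconds came from a target ~60 m away.
print(tof_distance(400e-9))  # ~59.96
```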
To achieve real-time processing, Autoware filters and pre-processes the raw point cloud data obtained from the LiDAR scanners. Data from other sensors, such as radar, GNSS, and IMUs, can be used to refine localization, detection, and mapping.
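One common pre-processing step of this kind is voxel-grid downsampling, which Autoware.AI performs in C++ via the Point Cloud Library; the NumPy sketch below is only an illustration of the idea, not Autoware code. The cloud is divided into cubes of fixed size and each occupied cube is replaced by the centroid of its points:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace every occupied voxel with the centroid of its points.

    points: (N, 3) array of x, y, z LiDAR returns in metres.
    voxel_size: edge length of the cubic voxels in metres.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Map each point to the index of its (unique) voxel.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    centroids = np.empty((counts.size, 3))
    for dim in range(3):  # average x, y, z per voxel
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

cloud = np.random.uniform(-50.0, 50.0, size=(100_000, 3))
print(voxel_downsample(cloud, 5.0).shape)  # about (8000, 3) for this synthetic cloud
```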
Computing:
The computing layer is an essential component of any AV: it uses sensor data and 3D maps so that the vehicle can compute its final trajectory and communicate with the actuation modules. The core modules are perception, decision-making, and planning.
- Perception: Safety in autonomous vehicles is a high-priority issue. The perception modules must therefore calculate an accurate position for the ego-vehicle inside a 3D map and recognize objects in the surrounding scene, as well as the status of traffic signals.
- Decision: Once obstacles and traffic signals have been detected, the trajectories of other moving objects can be estimated. The mission planning and decision-making modules use these estimates to determine an appropriate position to which the ego-vehicle should move. Autoware implements an intelligent state machine to understand, forecast, and make decisions in response to the road status (a much-simplified sketch of this idea follows this list). Moreover, Autoware also allows occupants of the ego-vehicle to supervise the automation, overriding the state determined by this module.
- Planning: This module generates trajectories following the output of the decision-making module. Path planning can be divided into mission planning and motion planning. The mission planning module decides a global trajectory based on the current location and the given destination; local trajectories that track this global trajectory are then generated by the motion planning module.
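To make the decision module's role concrete, here is a much-simplified state machine in Python. The states and inputs are invented for this example and are far coarser than anything in Autoware itself:

```python
from enum import Enum, auto

class DriveState(Enum):
    CRUISE = auto()   # follow the global trajectory at the target speed
    FOLLOW = auto()   # hold a gap behind a slower lead vehicle
    STOP = auto()     # hold position for a red light or blocking obstacle

def next_state(red_light, obstacle_ahead, lead_vehicle):
    """Toy transition function driven by perception outputs."""
    if red_light or obstacle_ahead:
        return DriveState.STOP
    if lead_vehicle:
        return DriveState.FOLLOW
    return DriveState.CRUISE

print(next_state(red_light=False, obstacle_ahead=False, lead_vehicle=True))
# DriveState.FOLLOW
```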
Actuation:
Once local trajectories are determined, the autonomous vehicle needs to follow them. Path-following algorithms such as pure pursuit or model predictive control (MPC) generate the actuation commands (steering, throttle, braking) that keep the ego-vehicle on the planned trajectory.
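As an illustration of the pure-pursuit idea, the sketch below computes a steering angle for a bicycle-model vehicle from a lookahead point on the local trajectory; the vehicle-frame convention and the example numbers are assumptions for this sketch, not Autoware's implementation:

```python
import math

def pure_pursuit_steering(target_x, target_y, wheelbase):
    """Pure-pursuit steering angle for a bicycle-model vehicle.

    (target_x, target_y): lookahead point on the local trajectory,
    in the vehicle frame (x forward, y left), metres.
    wheelbase: distance between the front and rear axles, metres.
    """
    lookahead = math.hypot(target_x, target_y)
    alpha = math.atan2(target_y, target_x)  # heading error to the target
    # Classic pure-pursuit law: delta = atan(2 * L * sin(alpha) / lookahead)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# A point 10 m ahead and 1 m to the left, wheelbase 2.7 m: ~3 degrees of steer.
print(math.degrees(pure_pursuit_steering(10.0, 1.0, 2.7)))
```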
System Requirements:
As mentioned before, Autoware is based on the Robot Operating System (ROS) and other open-source software libraries, such as:
- Point Cloud Library (PCL) is mainly used to manage LiDAR scans and 3D mapping data, in addition to performing data-filtering and visualization functions.
- CUDA is a programming framework developed by NVIDIA for general-purpose computing on GPUs (GPGPU). GPUs running CUDA are a promising way to handle the computation-intensive tasks involved in self-driving, though this article does not focus on them.
- Caffe is a deep learning framework designed with expression, speed, and modularity in mind.
- OpenCV is a popular computer vision library for image processing.
Autoware uses these tools to compile a rich set of software packages, including sensing, perception, decision-making, planning, and control modules.
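As a small taste of these libraries in action, the toy OpenCV snippet below estimates whether a cropped traffic light image contains a lit red lamp by thresholding the hue channel. It is only an illustration: a real pipeline would first locate the signal's region of interest in the image, for example by using the 3D map and camera calibration.

```python
import cv2

def red_fraction(bgr_roi):
    """Fraction of pixels in a traffic light ROI that look red.

    Red wraps around the ends of OpenCV's 0-179 hue axis, so two
    hue bands are combined. The thresholds here are illustrative guesses.
    """
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    low = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 120, 120), (179, 255, 255))
    mask = cv2.bitwise_or(low, high)
    return cv2.countNonZero(mask) / float(mask.size)

roi = cv2.imread("traffic_light_roi.png")  # hypothetical cropped ROI image
if roi is not None and red_fraction(roi) > 0.05:
    print("red light")
```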
In the next blog post, I will go through the ROS installation and its integration with Autoware.
Written by Amina Hamoud – Project Engineer
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion