By eliminating the most common cause of traffic accidents, human error, and introducing autonomous vehicles into traffic, the number of accidents and deaths is expected to decrease. However, these vehicles also expose drivers, passengers and pedestrians to an entirely new type of risk.
Autonomous vehicles are artificial intelligence systems that use machine learning techniques to collect, analyze and transfer data in order to make the decisions that a human driver makes in a conventional car.
However, as zimo.dnevnik.hr writes, like any other IT system, artificial intelligence is susceptible to cyberattacks that can jeopardize the proper functioning of a vehicle and cause major traffic problems.
What's more, you don't have to be a hacker or an IT expert to disrupt the operation of an autonomous vehicle. Painting markings on the road to fool the navigation, or placing a sticker on a traffic sign, say a Stop sign, so that the artificial intelligence (AI) system fails to recognize it, are just some examples of such possible attacks.
Such situations can cause the AI system to misclassify objects and, as a result, behave dangerously in traffic.
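The report itself contains no code, but the kind of attack described above, a small, deliberate change to the input that flips a classifier's decision, can be sketched on a toy model. The weights, the input vector and the perturbation budget below are all invented for illustration; real perception systems are deep networks, not linear scorers.

```python
import numpy as np

# Toy linear "sign classifier": score > 0 means "stop sign", otherwise "other".
# The weights are fixed for illustration only.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = -0.5

def classify(x):
    return "stop" if x @ w + b > 0 else "other"

# A clean input that the model classifies correctly.
x_clean = np.array([0.9, 0.1, 0.4, 0.8])

# FGSM-style perturbation: take a small step against the score's gradient.
# For a linear model the gradient of the score w.r.t. x is simply w.
epsilon = 0.9                          # perturbation budget (assumed)
x_adv = x_clean - epsilon * np.sign(w)

print(classify(x_clean))  # -> stop
print(classify(x_adv))    # -> other: the small perturbation flips the label
```

The same idea, scaled up to images, is what makes a well-placed sticker enough to hide a Stop sign from a recognition system.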
The AI systems of autonomous vehicles work continuously to recognize traffic signs and other vehicles, and to estimate their speed in order to plan the journey ahead. Beyond unintentional threats, such as sudden failures, these systems are vulnerable to intentional attacks whose specific goal is to disrupt the AI system and disable safety-critical functions.
A new report by the European Union Agency for Cybersecurity (ENISA) and the Joint Research Centre (JRC), the European Commission's science and knowledge service, points to the cybersecurity risks associated with the introduction of artificial intelligence in autonomous vehicles and makes recommendations on how to avoid them.
"When an insecure autonomous vehicle crosses the border of an EU member state, so do its vulnerabilities. Security should not be an afterthought, but a prerequisite for the trustworthy and reliable deployment of vehicles on European roads," says Juhan Lepassaar, Executive Director of ENISA. It is also important that European regulations ensure that the benefits of autonomous driving are not undermined by safety risks.
How can this be prevented?
The report also makes several recommendations on how to improve the AI security of autonomous vehicles.
One of these recommendations is to regularly validate AI components throughout their lifetime. Systematic verification of AI models and data is key to ensuring that the vehicle always behaves correctly when faced with unexpected situations or a malicious attack.
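The report does not prescribe how such verification should be implemented; one simple form it could take is a robustness check that probes a model with small random perturbations and flags inputs whose prediction flips. The model, the input and the thresholds below are hypothetical stand-ins, a minimal sketch rather than an actual validation pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deployed perception model: any callable
# mapping a feature vector to a label would fit this harness.
w = np.array([1.0, -2.0, 0.5, 3.0])
def model(x):
    return int(x @ w - 0.5 > 0)

def robustness_check(model, x, epsilon=0.05, trials=200):
    """Estimate how often small random perturbations flip the prediction."""
    base = model(x)
    noise = rng.uniform(-epsilon, epsilon, size=(trials, x.size))
    flips = sum(model(x + n) != base for n in noise)
    return flips / trials

x = np.array([0.9, 0.1, 0.4, 0.8])
flip_rate = robustness_check(model, x)
# A nonzero flip rate on an input that should be stable would flag
# the model for review before (or during) deployment.
print(f"flip rate: {flip_rate:.2%}")
```

Run periodically over a reference dataset, a check like this gives a crude but repeatable signal of whether a model update has made the system more fragile.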
The second recommendation is that continuous risk assessments, supported by threat intelligence, should be conducted to enable the identification of potential AI risks and emerging threats related to the adoption of artificial intelligence in autonomous driving.
An adequate AI security policy and an AI security culture should govern the entire supply chain in the automotive industry.
The automotive industry should also embrace a security-by-design approach to the development and deployment of AI functionality, in which cybersecurity is a central element of digital design from the outset.
Finally, it is important for the automotive sector to raise its level of preparedness and further strengthen its incident response capabilities, so that it can cope with emerging cybersecurity problems associated with artificial intelligence.