Building trust in AI through a cyber risk-based approach
The international agencies and government authorities behind this document advocate a risk-based approach to support trusted AI systems and secure AI value chains, and call for the discussion to continue beyond the AI Summit, addressing in equal measure the opportunities, risks and evolving cyber threats that accompany AI adoption.
AI, a transformative technology under development since the 1950s, now impacts almost every sector, from defence and energy to health and finance. Its rapid adoption, including the use of large language models (LLMs), and the growing reliance placed on it should encourage stakeholders to assess the related risks, including those associated with cybersecurity.
Without adequate measures – and given that users still tend to underestimate AI-related cyber risks – malicious actors could exploit vulnerabilities in AI systems and jeopardize the future use of AI technology. It is therefore crucial to understand and mitigate these risks in order to foster trusted AI development and to fully embrace the opportunities this technology offers.
While AI-enhanced solutions, whether defensive or offensive, are already well covered both in academic papers and in various frameworks currently under development, this document focuses on the cybersecurity of AI systems themselves. It aims to provide a high-level, synthetic and comprehensive analysis of the related cyber risks and to offer guidance for assessing threats and implementing adequate security measures, building on the Guidelines for Secure AI System Development, developed in collaboration with over 20 international organizations and jointly released in November 2023.
This risk analysis considers not only the vulnerabilities of individual AI components, but also the security of the broader AI systems that integrate them. Its purpose is to provide a broad overview of AI-related cyber risks rather than an exhaustive list of vulnerabilities.