Security recommendations for a generative AI system
The document aims to raise awareness among public authorities and private entities of the risks associated with generative AI, and to promote good practice in the implementation of this type of system.
The recent surge of interest in generative Artificial Intelligence (AI) products and services, some of which are now readily available to the general public, has prompted public- and private-sector organizations to consider the productivity gains such tools could deliver.
While this technology opens up new prospects, a cautious approach is called for when deploying and integrating it into an existing information system.
ANSSI's security recommendations for a generative AI system focus on securing the architecture of such a system. The guide aims to raise awareness of the risks associated with generative AI and to promote best practices to be implemented from the design and training phases of an AI model through to its deployment and use in production.
Security issues linked to data quality, and the performance of an AI model from a business perspective, are not covered in this guide, nor are topics such as ethics, privacy, or the protection of personal data.
Generative AI is a subfield of artificial intelligence, focused on creating models trained to generate content (text, images, videos, etc.) from a specific corpus of training data.
This includes Large Language Models (LLMs), which can generate a response to a natural-language question using a model trained on very large volumes of data.
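To make this definition concrete, the sketch below queries a small pretrained language model with a natural-language prompt. It is a minimal illustration, assuming the Hugging Face transformers library is installed and using gpt2 purely as a stand-in; neither is prescribed by this guide.

```python
# Minimal sketch: querying a pretrained text-generation model.
# Assumes the Hugging Face "transformers" library; "gpt2" is an
# illustrative choice, not a model recommended by this guide.
from transformers import pipeline

# Load a small pretrained language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# Ask a question in natural language; the model generates a
# continuation based on patterns learned from its training corpus.
prompt = "What are the main security risks of deploying a language model?"
result = generator(prompt, max_new_tokens=50, num_return_sequences=1)
print(result[0]["generated_text"])
```

In a production deployment, the same interaction pattern typically sits behind an API and additional components (prompt filtering, logging, access control), which is precisely the attack surface the recommendations in this guide address.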