You’ve probably read or heard about it in the media over the last few days: on Friday, December 8, the European institutions reached an agreement on the future regulation of Artificial Intelligence. This is the famous European Artificial Intelligence Act, whose broad outlines I had already presented in a previous article.
At the time, I didn’t mention the regulation of general-purpose models, as this was still under discussion. Now it’s time to make up for this omission.
What follows is based on the information available 48 hours after the agreement. The detailed text is not yet known; it should be published before January 22, the date of the first parliamentary committee meeting on the subject. In any case, my aim is not to go into detail, but simply to give you an idea of the approach adopted.
1. Why general-purpose AI complicates regulation
General-purpose AI models appeared a few years ago. They are characterized by the modality they process (text, image, video, 3D…) and by their discriminative or generative nature.
These models are characterized by a broad spectrum of applications, and their great advantage is that they can be fine-tuned to perform a specialized task with precision. This fine-tuning can be carried out by another company with far fewer resources than those needed to train the base model. A generative text model like GPT-3 can therefore be adapted to perform different tasks in different sectors (e.g. chatbots for customer service).
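To make this "downstream" step more concrete, here is a minimal sketch of what such an adaptation can look like in practice, using the Hugging Face transformers library. The model name, the `support_tickets.csv` file and its columns are purely illustrative assumptions on my part, not anything taken from the Act.

```python
# Minimal sketch: a downstream player fine-tunes a pretrained general-purpose
# text model into a customer-service ticket classifier.
# All names below (model, file, labels) are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "distilbert-base-uncased"  # pretrained by the "upstream player"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=3)

# Hypothetical labelled data: a CSV with a "text" column (customer message)
# and an integer "label" column (0 = billing, 1 = shipping, 2 = other).
dataset = load_dataset("csv", data_files="support_tickets.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="fine_tuned_support_model",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=dataset,
)

# Adapting the base model costs a small fraction of what pretraining it did.
trainer.train()
```

The point of the sketch is the economics of the value chain: the expensive pretraining is done once by the upstream player, while the downstream adaptation can run on modest hardware.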
As a result, the general-purpose AI value chain may involve several players: an upstream player who develops a powerful general-purpose model and makes it available to downstream players, who fine-tune and exploit the model and in turn market it to end-users.
This multiplication of players fits poorly with the logic of the EU AI Act, which is based on the risk to the end-user. That logic is appropriate for an AI application developed by an organization for a specific purpose, but if we apply it to general-purpose AI, only downstream players are directly subject to regulation. Upstream players are regulated only indirectly, through the “percolation” of requirements placed on downstream players. That is hardly balanced if you are a small start-up exploiting a model developed by Google or OpenAI… and given the technically central role of the upstream player, the risks are not regulated at their source.
It was therefore necessary to define a different set of rules for general-purpose AI, applying specifically to the upstream player. This does not entirely absolve the downstream player, who remains subject to regulatory constraints based on user risk, but the latter can at least rely on the compliance of the general-purpose model it builds on.
2. How general-purpose AI will be regulated
This regulation distinguishes two categories of models on the basis of their capabilities: the most capable models are classified as “systemic”, as opposed to all the others.
All general-purpose models are subject to transparency requirements: their providers must document in detail the architecture of the model and the dataset used to train it, and confirm compliance with copyright law. Content produced by a generative model must be recognizable as such.
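To give an idea of what such documentation could look like, here is a toy, machine-readable "model card" a provider might publish. The field names and values are my own illustration; the Act does not prescribe this format.

```python
# Toy illustration of transparency documentation for a general-purpose model.
# Field names and values are invented for the example, not taken from the Act.
import json

model_card = {
    "model_name": "example-gpt",                # hypothetical model
    "architecture": {
        "type": "decoder-only transformer",
        "parameters": "7B",
        "context_length": 4096,
    },
    "training_data": {
        "sources": ["public web crawl", "licensed book corpus"],
        "copyright_policy": "rights-holder opt-outs honoured; sources documented",
    },
    "generated_content_marking": "outputs carry a machine-readable 'AI-generated' tag",
}

print(json.dumps(model_card, indent=2))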
Moreover, models considered “systemic” will be subject to additional requirements: their creators will have to carry out model evaluations, show how they manage and mitigate risks, notify the authorities in the event of an incident, and demonstrate their resilience to cyber-attacks.
General-purpose open-source models will benefit from lighter regulation (at least when they are not systemic), but its exact scope is not yet clear.
All these requirements will be detailed and specified through harmonized European standards to be drawn up by bodies such as CEN/CENELEC’s AI committee, once the Act has been passed.
3. References
- Press release from the Council of the European Union of December 9, 2023: https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
- European Parliament press release, December 9, 2023: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
- Official video of the December 8, 2023 press conference: https://video.consilium.europa.eu/event/en/27283
- European Union squares the circle on the world’s first AI rulebook, by Luca Bertuzzi (Euractiv), December 9, 2023: https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/
- AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement, by Luca Bertuzzi (Euractiv), December 7, 2023: https://www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-policymakers-nail-down-rules-on-ai-models-butt-heads-on-law-enforcement/
Translated with DeepL and adapted from our partner Arnaud Stevins’ blog (Dec. 25th, 2023).
December 11th, 2023