When does an entity become a provider of a GPAI model under the AI Act?
Analysing some initial insights from the European Commission on fine-tuning foundation models

Consider the following scenario:
- A US company develops a multi-modal foundation model and makes it available to others to further engineer or modify.
- An EU company procures the foundation model, fine-tunes it with its own training data for a specific use case, and then either:
(A) makes the model available to others outside the company, or
(B) makes the model available only to its own employees.
Under the EU AI Act, the EU company in (A) is a provider and in (B) a deployer. See my previous post on the definitions of providers and deployers under the Act.
But in both (A) and (B), what is the EU company a provider or deployer of exactly? Is it a general-purpose AI (GPAI) model, an AI model or something else?
There are two types of assets that the AI Act defines in the context of the GPAI supply chain: GPAI models and GPAI systems.
A GPAI model is an AI model that:1
- Is trained with a large amount of data
- Displays significant generality
- Can perform a wide range of distinct tasks
- Can be integrated into a variety of downstream systems (i.e., AI engineering)
This essentially refers to generative AI models, including LLMs.2 Contrastingly, a GPAI system is an AI system based on a GPAI model that (a) can be used for a variety of purposes, and (b) can be used directly or integrated with other AI systems.3
So a GPAI model becomes a system when it is combined with "further components", such as a user interface.4 Accordingly, OpenAI's ChatGPT is a GPAI system which is underpinned by OpenAI's various foundation models, such as o3, which are GPAI models.
Returning to the scenario presented at the beginning of this post: if a company takes a foundation model and fine-tunes it with its own data to produce a new model designed for a specific use case, is this still a GPAI model or is it just an AI model? This matters because providers of GPAI models have specific, and fairly onerous, obligations under the Act.
The EU AI Office, which sits within the European Commission, has published a set of FAQs which covers this issue among others regarding the GPAI provisions in the AI Act:
General-purpose AI models may be further modified or fine-tuned into new models (recital 97 AI Act). Accordingly, downstream entities that fine-tune or otherwise modify an existing general-purpose AI model may become providers of new models. The specific circumstances in which a downstream entity becomes a provider of a new model is a difficult question with potentially large economic implications, since many organisations and individuals fine-tune or otherwise modify general-purpose AI models developed by another entity.
In the case of a modification or fine-tuning of an existing general-purpose AI model, the obligations for providers of general-purpose AI models in Article 53 AI Act should be limited to the modification or fine-tuning, for example, by complementing the already existing technical documentation with information on the modifications (Recital 109). The obligations for providers of general-purpose AI models with systemic risk in Article 55 AI Act should only apply in clearly specified cases. The AI Office intends to provide further clarifications on this question.
Regardless of whether a downstream entity that incorporates a general-purpose AI model into an AI system is deemed to be a provider of the general-purpose AI model, that entity must comply with the relevant AI Act requirements and obligations for AI systems.
What can we deduce from this?
GPAI models, like foundation models, can be fine-tuned or modified by entities other than the original developer.
If GPAI models are fine-tuned or modified, then the Commission seems to take the view that the entity making the modification or fine-tuning is a provider of a (new) GPAI model. Accordingly, at least some of the obligations pertaining to these models should apply. This is, however, subject to further clarifications from the EU AI Office.
Regardless, if the entity then adds other components to the fine-tuned GPAI model, thereby making it an AI system, then the relevant provisions for AI systems will apply. This means that the system will fall within one of the relevant risk categories, with the respective obligations applying to that system.
A key issue here is whether a GPAI model can ever stop being general-purpose. For example, if a foundation model (a GPAI model) is fine-tuned with data for a specific task or domain, and its instructions for use dictate that it can only be used for that task or domain, does it cease to be a GPAI model and instead become a mere AI model?
You could say the entity has made concrete efforts to limit the capabilities of the model using both technical measures (fine-tuning, which narrows the output distribution the model relies on when responding to prompts) and organisational/legal measures (developing instructions for use that dictate the specific tasks or domains for which the model may be used). Accordingly, the model, at least according to the modifying entity, is no longer general-purpose.
However, in practice, this is moot. Foundation models are evidently hard to control (see my previous post on AI alignment potentially being impossible). Therefore, it is not realistic for a model to be fine-tuned to the extent that it completely loses its general-purpose nature; it can be jailbroken and therefore prompted to perform tasks outside its intended domain. But it might perform poorly on those other tasks.
In late April, the Commission published, for consultation, guidelines to clarify the scope of the obligations of providers of general-purpose AI models in the AI Act. The Commission has stressed that this is a working document for consultation and that the response to the consultation "will provide important input to the Commission when preparing the guidelines."5 We can only therefore treat this document as the Commission's initial thoughts, but it nevertheless provides some interesting insights on how it interprets the GPAI provisions in the AI Act.
Section 3.2 answers the critical question for this post: Who is the provider of a general-purpose AI model, and when is a downstream modifier a provider?
If Entity A develops a general-purpose AI model and, on or outside the Union market, makes it available to Entity DM, who modifies the model...and places the modified model on the market, then:
- Entity A is the provider of the original model and must comply with the obligations for providers of general-purpose AI models, and
- Entity DM is the provider of the modified model (“downstream modifier”...) and must comply with the obligations for providers of general-purpose AI models, unless the modified model is not a general-purpose AI model.6
So the Commission is seemingly leaving both options open. In its view, it appears possible to apply modifications that strip GPAI models of their general-purpose nature, such that they are no longer GPAI models. But reading further, the Commission's guidelines state the following:
...the AI Office deems that not every modification of a general-purpose AI model should lead to the downstream modifier being considered as a provider of a general-purpose AI model who is subject to the obligations laid down in Articles 52 to 55 AI Act. Instead, the AI Office deems that only those modifications that have a significant bearing on the rationales behind the obligations for providers of general-purpose AI models in the AI Act should lead to the downstream modifier being considered as a provider of a general-purpose AI model for the purposes of the respective obligations. For instance, when it comes to general-purpose AI models with systemic risk, only modifications that lead to a significant change in systemic risk should lead to downstream modifiers being considered as providers of general-purpose AI models with systemic risk.7
This is interesting. The Commission does believe that there are modifications to GPAI models that result in the GPAI provisions not applying. But this view is not based on the idea that GPAI models can be modified so as to rid them of their general-purpose nature, making the GPAI provisions irrelevant. Rather, the Commission believes that only modifications which "have a significant bearing on the rationales behind" the GPAI provisions trigger those provisions. To that end, the Commission provides some 'thresholds' that determine whether the entity is a provider of a GPAI model or not. It does this for both GPAI models with and without systemic risk.
A downstream modifier of a GPAI model is a provider of a GPAI model without systemic risk if the computational resources used to modify the model are greater than a third of the training compute threshold for the original model. This is justified on the basis that this amount of compute suggests "a significant change in properties and behaviour" of the model, including "a significant change in generality and capabilities as compared to the original model."8 Such a level of compute also suggests that a "significant amount of data" was used to modify the GPAI model.9
If classed as a provider of a GPAI model, the obligations are limited to the modification carried out. This means that the documentation only needs to relate to the modification.10
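The one-third rule can be expressed as a simple comparison. The sketch below is illustrative only: the function name is mine, and it assumes the relevant baseline is the compute used to train the original model (the guidelines' phrasing leaves some room for interpretation on this point).

```python
MODIFICATION_FRACTION = 1 / 3  # proportion proposed in the draft guidelines

def is_provider_of_modified_gpai_model(original_training_compute: float,
                                       modification_compute: float) -> bool:
    """Sketch of the draft guidelines' rule, as I read it: a downstream
    modifier becomes a provider of a (new) GPAI model if the compute used
    for the modification exceeds one third of the original model's
    training compute. Both arguments are in FLOP; the choice of baseline
    is my assumption, not settled text."""
    return modification_compute > MODIFICATION_FRACTION * original_training_compute
```

For example, fine-tuning a model originally trained with 9 × 10²⁴ FLOP using 4 × 10²⁴ FLOP of modification compute would exceed the one-third mark, whereas 2 × 10²⁴ FLOP would not.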
Separately, an entity becomes a provider of a GPAI model with systemic risk if either of the following applies:
The original GPAI model is one with systemic risk and the amount of compute used to modify the model is greater than a third of the training compute threshold for the original model, set under Article 51.2 of the Act at 10^25 FLOPs. This condition is justified by the Commission on the basis that this amount of compute suggests that "the modified model can be expected to present significantly changed systemic risk compared to the original model."11
The original model is not a GPAI model with systemic risk, and the downstream modifier (a) knows, or can reasonably be expected to know, the cumulative amount of compute used to train the original model, and (b) the sum of that training compute and the modification compute is greater than 10^25 FLOPs (the threshold specified under Article 51.2 of the Act). This condition is justified by the Commission on the basis that the original GPAI model "would not have undergone any systemic risk assessment or mitigation" and "therefore any systemic risks presented by the modified model will not have been assessed and mitigated by the provider of the original model."12
If an entity meets either of the above conditions, and therefore becomes a provider of a GPAI model with systemic risk, then its obligations are not limited to the modification of the model. Instead:
...the systemic risk assessment and mitigation required by Article 55(1) AI Act should be conducted anew for the modified model, taking account any available information about the original model. The provider also has to notify the Commission in accordance with Article 52(1) AI Act.13
In aggregate, according to the Commission's current view, if a company takes a foundation model and fine-tunes it with its own data to produce a new model designed for a specific use case, then:
- It is possible for the new model to become a mere AI model, though the Commission does not specify exactly when this is the case
- If the new model remains a GPAI model, the GPAI model provisions under the AI Act apply only if a certain amount of compute is used for the modification of the model
It will be interesting, after the consultation, to see what the final guidelines say on this issue.
EU AI Act, Article 3.63.
EU AI Act, Recital (99).
EU AI Act, Article 3.66.
EU AI Act, Recital (97).
European Commission, Targeted consultation in preparation of the Commission Guidelines to Clarify the Scope of the Obligations of Providers of General-Purpose AI Models in the AI Act (April 2025), p.10.
European Commission, Targeted consultation in preparation of the Commission Guidelines to Clarify the Scope of the Obligations of Providers of General-Purpose AI Models in the AI Act (April 2025), p.10.
European Commission, Targeted consultation in preparation of the Commission Guidelines to Clarify the Scope of the Obligations of Providers of General-Purpose AI Models in the AI Act (April 2025), p.10.