Who are providers under the AI Act?
The entities subject to the most onerous obligations under the EU's AI regulation
TL;DR
This newsletter is about the concept of providers under the EU AI Act. It looks at how these entities are defined under the regulation, the criteria for becoming a provider of a high-risk AI system and the implications this has for AI engineers.
Here are the key takeaways:
Providers under the AI Act are essentially the entities that build AI systems. By contrast, a deployer is simply an entity that uses an AI system developed by a provider.
It is possible for a deployer to be designated as a provider of a high-risk AI system depending on how it uses an AI system developed by the original provider. This includes where a deployer:
Puts their name or trademark on a high-risk AI system
Makes a substantial modification to a high-risk system
Modifies the 'intended purpose' of an AI system in such a way that the AI system becomes a high-risk AI system
It is therefore possible for entities building on top of foundation models to become providers of high-risk AI systems if their development efforts involve any of the above. Those building with such models should ensure either that their use case is not considered high-risk under the AI Act or that the model is not used for purposes beyond those permitted by the developer's use policy or services agreement.
The definition of a provider
The definition of a 'provider' under Article 3.3 of the Act is as follows:
...a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Simply put, a provider is a developer or maker of an AI system. But let's break down this definition in the Act:
"...a natural or legal person, public authority, agency or other body..." - This means that a provider can be either an individual or an organisation, and such organisations can be private or public.
"...develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed..." - Clearly the provision captures those entities that build AI systems of their own volition as well as those that instruct others to build AI systems on their behalf. This means that one cannot escape the obligations of providers under the Act simply by outsourcing development to another entity.
"...places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge." - This means that the developer builds an AI system and makes it available in the EU or supplies it to another entity to use. It does not matter whether the developer does this for free or for payment.
By contrast, a deployer is an entity that uses an AI system. It is possible for the same entity to be both a provider (maker of the system) and a deployer (user of the system). However, an entity is not a deployer if it uses the AI system in the course of a personal, non-professional activity.
The distinction between provider and deployer is important as, under the AI Act, providers have more onerous obligations than deployers.
Providers of high-risk AI systems
The bulk of the AI Act focuses on high-risk AI systems. The criteria for such systems under Article 6 are essentially two-tiered, whereby an AI system is high-risk if it is either:
Integrated into products regulated by specific sectoral product safety regulations as listed under Annex I
Used for performing certain activities in certain sectors listed under Annex III (such as biometric identification systems, emotion recognition systems, etc.)
If an entity builds an AI system that meets either condition, then that entity will be classed as a provider of a high-risk AI system. However, it is also possible for an entity to become a provider of a high-risk AI system depending on how it uses or deploys an AI system.
The relevant provision here is Article 25. Under this provision, an entity (including a deployer) becomes a provider of a high-risk AI system if that entity does at least one of the following:
Puts their name or trademark on a high-risk AI system
Makes a substantial modification to a high-risk system
Modifies the 'intended purpose' of an AI system in such a way that the AI system becomes a high-risk AI system as per Article 6
If an entity does any of the above, then it is essentially creating a new high-risk AI system out of the original high-risk AI system it was using or deploying, and it therefore becomes the provider of that new high-risk AI system. The logic is sketched below.
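For AI engineers trying to map their own product onto these rules, the Article 25 triggers can be read as a simple any-of check. The snippet below is a rough, non-authoritative sketch of that logic only: the function name, its parameters, and the idea of reducing each legal test to a boolean are assumptions made for illustration, not anything taken from the Act itself.

```python
# Illustrative sketch only: a simplified reading of the Article 25 triggers.
# Reducing these legal tests to boolean flags, and the names used here, are
# assumptions made for illustration; they are not defined anywhere in the AI Act.

def becomes_provider_of_high_risk_system(
    puts_own_name_or_trademark_on_system: bool,
    makes_substantial_modification: bool,
    modified_purpose_is_high_risk_under_article_6: bool,
) -> bool:
    """Return True if any of the simplified Article 25 triggers applies."""
    return (
        puts_own_name_or_trademark_on_system
        or makes_substantial_modification
        or modified_purpose_is_high_risk_under_article_6
    )


# Example: a deployer white-labels a vendor's high-risk AI system under its own brand.
print(becomes_provider_of_high_risk_system(True, False, False))  # True
```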
Trademarking
If an entity deploys an AI system but makes it seem as though it built the system, despite it being built by a separate entity, then it could become a provider. For example, if an entity puts its logo or other elements of its branding on the user interface of an AI system that it is using but has not developed itself, it may be reasonable to presume that the logo indicates who the provider of the system is.
Guidance for this can be found in the Court of Justice of the European Union's interpretation of a 'producer' under the Product Liability Directive. On this, the Court held that a person who presents themselves as a producer by putting their name, trademark, or any other "distinguishing feature" on a product is considered a producer of that product.1
Substantial modification
Article 3.23 defines a 'substantial modification' as a change to an AI system not foreseen by the provider that either affects the system's compliance with the high-risk requirements under the Act or results in a modification to the intended purpose of the AI system.
Simply put, a substantial modification includes any change to the AI system not envisaged by the original provider. Critical to this are the expectations of the original provider: is the modification within the realm of the original provider's intentions regarding how its system may be modified by others? If it is not, then the entity making the modification will become a provider.
Intended purpose
Providers of high-risk AI systems must provide instructions for using their systems. These instructions for use are defined under Article 3.15 as "the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use." Under Article 13, these instructions must include, among other things, the intended purpose of the system.
Under Article 3.12, 'intended purpose' is defined as:
...the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
So if (a) an entity uses the system in a way that sits outside the intended purpose as envisaged by the original provider2 and (b) that use meets the Article 6 criteria for high-risk AI systems, then that entity will be a provider of a high-risk AI system.