
2025 was supposed to be the year of agents.
Generative AI (genAI), central to the current AI hype cycle, represents a significant leap from the previous generation of AI. It took AI from systems that could recognise patterns and make predictions on a narrow set of tasks to systems that are general-purpose and capable of completing a wider variety of tasks via natural language. This is what the advent of ChatGPT and other LLM-based chatbots represents.
But chat interfaces that users prompt have turned out to be merely one of the many systems these AI models can power. In 2025, we have seen the growth and proliferation of ‘agents’ - systems made up of AI models and other components which are capable, to a certain degree, of perceiving and acting upon their environment.1
The use of the term ‘agents’ could be controversial, as it attributes a degree of sentience to AI models that has not necessarily been proven. Nevertheless, these models, when built in a certain way, can complete tasks with a reliability that the models of yesteryear could never get close to.
The range of use cases involving AI agents continues to expand: conducting research, managing customer complaints, managing inboxes, writing code and much more. Some of these use cases may be more promising than others, but the move towards building agentic systems is undeniable at this point.
And the growth of agentic AI represents arguably the next leap in AI capabilities, one that could have great economic and societal impacts.
Yet, despite the proliferation of this newer type of AI system, it is questionable whether the EU’s AI regulation actually regulates it.
The AI Act applies to a definition of AI systems that makes reference to factors like ‘autonomy’, ‘adaptiveness’ and ‘influencing environments.’ Even so, the European Commission seems to have doubts about whether AI agents fall within the scope of the Act.

In this post, I explore this conundrum and consider whether agentic AI falls within the scope of the AI Act. I do so by covering:
What agentic AI is and how it differs from other forms of AI engineering
The material scope of the AI Act
Whether agentic AI falls within the Act’s material scope
What this debate reveals about governing AI
What is agentic AI?
As I mentioned before, agentic AI refers to AI systems comprising AI models and other components which are capable, to a certain degree, of perceiving and acting upon their environment. For an agent to operate in its environment and achieve the goal it has been set, it needs a set of actions to follow and a set of tools to use.
A very simplistic version of this is using ChatGPT to conduct a web search. For example, we could give ChatGPT the following prompt:
Find me YouTube videos that explain how to build AI agents

The goal here is for ChatGPT to find videos on building AI agents. The actions it therefore needs to take consist of accessing YouTube, finding videos that match the query (‘building AI agents’) and reporting back on the videos that best match the query. The environment it is operating in is the internet, or more specifically YouTube. The tool it used for this task was the web browsing function built into ChatGPT.
This is a very simple example of an agent because ChatGPT is taking an input, identifying the goal and the steps required to achieve it, and using the right tools to execute those steps. And this all happens in a single run.
But AI agents can do a bit more than this.
Agentic AI is about engaging a feedback loop that shapes how the agent behaves when completing a given task. This feedback loop enables the agent to complete more complex tasks using multiple runs. In other words, based on a given input, the AI agent can form a thought, take an action using certain tools, observe the output and then take the next action based on that output. This loop repeats until the agent can verify that a final response to the task has been produced.
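To make this loop a bit more concrete, here is a minimal sketch of that thought-action-observation cycle in Python. The model call and the single tool here are toy stand-ins I have made up for illustration, not any particular vendor’s API.

```python
TOOLS = {
    # Stand-in tool: a real agent might expose web search, code execution, etc.
    "search_videos": lambda query: ["video A on agent basics", "video B on tool use"],
}

def call_model(context):
    """Stand-in for an LLM call. A real model would decide, from the context,
    whether to request another tool call or to return a final answer."""
    tool_outputs = [m["content"] for m in context if m["role"] == "tool"]
    if not tool_outputs:
        return {"action": "search_videos", "input": "building AI agents"}
    return {"final_answer": f"Found: {tool_outputs[-1]}"}

def run_agent(task, max_steps=10):
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_model(context)                            # form a thought / choose an action
        if "final_answer" in step:                            # agent judges the task complete
            return step["final_answer"]
        observation = TOOLS[step["action"]](step["input"])    # act using a tool
        context.append({"role": "tool", "content": observation})  # observe, then loop again
    return "Stopped: step limit reached without a final answer."

print(run_agent("Find me YouTube videos that explain how to build AI agents"))
```

The step limit is worth noting: in practice, agents are usually given some cap on how many cycles they can run before they must stop, precisely because the loop otherwise has no built-in end point.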

AI agents are therefore capable of managing the use of different data sources and tools when responding to inputs to complete multi-step tasks. This makes agents dynamic, capable of adjusting their plans based on new information.
The Model Context Protocol (MCP) is also highly relevant here. MCP is like a USB port for AI models, connecting them with information and tools outside their environment that they can utilise to complete a given task. It provides a standardised means of gaining further capabilities.
Very simply, MCP works like this:
Servers expose resources and tools
Based on the input received from the user, the model makes requests to the server via the app, such as ‘read this data’ or ‘call this tool’
The app enforces the permissions and includes the server outputs in the model’s context
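Sketched in code, the pattern might look something like the following. The class and method names are illustrative assumptions rather than the actual MCP SDK; the point is simply that the app sits between the model and the server, enforcing permissions and adding server outputs to the model’s context.

```python
class NotesServer:
    """Stand-in for an MCP server exposing one resource and one tool."""
    def read_resource(self, uri):
        if uri == "notes://today":
            return ["Follow up with supplier", "Book team meeting"]
        raise KeyError(uri)

    def call_tool(self, name, args):
        if name == "create_task":
            return {"created": True, "title": args["title"]}
        raise KeyError(name)

class HostApp:
    """The app sits between the model and the server."""
    def __init__(self, server, allowed_tools):
        self.server = server
        self.allowed_tools = allowed_tools
        self.model_context = []

    def handle_model_request(self, request):
        if request["type"] == "read":                       # "read this data"
            result = self.server.read_resource(request["uri"])
        elif request["type"] == "tool":                      # "call this tool"
            if request["name"] not in self.allowed_tools:    # permissions enforced by the app
                result = {"error": "tool not permitted"}
            else:
                result = self.server.call_tool(request["name"], request["args"])
        else:
            raise ValueError("unknown request type")
        self.model_context.append(result)                    # server output added to the model's context
        return result

app = HostApp(NotesServer(), allowed_tools={"create_task"})
print(app.handle_model_request({"type": "read", "uri": "notes://today"}))
print(app.handle_model_request({"type": "tool", "name": "create_task", "args": {"title": "Reply to supplier"}}))
```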
To give an example, let’s say you have a customer support agent and you ask it to do the following:
Summarise this week’s support tickets and draft replies

The agent’s response may look something like the following:
Interpret the prompt. The agent identifies the two parts to the task: (a) summarise the support tickets and (b) draft replies. It also identifies the required outputs: (a) a summary report and (b) draft response messages.
Gather data. The agent uses an MCP client to connect to external systems (databases, ticketing system, CRM etc.) via MCP servers. The server may expose the ticketing system database (including certain fields like ticket ID, customer name, issue description, status, priority and response time), from which the agent may query: ‘Get all tickets from date X to date Y’. The MCP client receives the response from the MCP server and adds it to the agent’s context.
Process data. The agent filters the tickets in scope, categorises issues (e.g., billing vs technical), computes metrics (counts, average resolution time) and conducts sentiment analysis on the ticket text.
Summarise the tickets. Using the processed data, the agent produces a summary.
Draft replies. The agent drafts replies to the tickets and, using the MCP, queues those replies in the ticketing system.
This workflow is what makes AI agents ‘agentic’. By combining the reasoning capabilities of AI models with MCP to access the necessary tools and data, agents can complete complex tasks that require multiple steps with minimal input from human users.
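To make the ‘process data’ step a little more concrete, here is a small sketch of how an agent might turn the ticket fields mentioned above into the counts and averages used in the summary. The field names and category keywords are assumptions made for illustration.

```python
from statistics import mean

# Illustrative ticket records using the fields mentioned above; the values
# and category keywords are made up for the sake of the sketch.
tickets = [
    {"ticket_id": 1, "issue_description": "charged twice on invoice", "status": "closed",
     "priority": "high", "response_time_hours": 3},
    {"ticket_id": 2, "issue_description": "app crashes on login", "status": "open",
     "priority": "medium", "response_time_hours": 12},
]

def categorise(description):
    # Crude keyword-based categorisation (billing vs technical vs other).
    if any(word in description for word in ("invoice", "charged", "refund")):
        return "billing"
    if any(word in description for word in ("crash", "error", "login")):
        return "technical"
    return "other"

by_category = {}
for ticket in tickets:
    by_category.setdefault(categorise(ticket["issue_description"]), []).append(ticket)

summary = {
    category: {
        "count": len(items),
        "avg_response_time_hours": mean(t["response_time_hours"] for t in items),
    }
    for category, items in by_category.items()
}
print(summary)  # this is the kind of intermediate output that feeds the summary report
```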
What is the material scope of the AI Act?
The purpose of the AI Act is to provide harmonised rules that “promote the uptake of human-centric and trustworthy artificial intelligence (AI).” The rules are also designed to ensure a high level of protection of health, safety and fundamental rights against “the harmful effects of AI systems.”2
So what does the Act consider to be an ‘AI system’ and which rules apply to which systems?
Definition of AI system
The Act provides the following definition for an AI system:
...a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.3
With this definition, we can identify several key components, which include:
Autonomy
Adaptiveness
Inferring from inputs
Influencing environments
Autonomy
The Act anticipates that AI systems will exhibit “some degree of independence of actions from human involvement” and have the capability to operate “without human intervention.”4 Accordingly, systems that operate with full manual human involvement and interventions fall outside the scope of the Act.5
Adaptiveness
Adaptiveness refers to AI systems with self-learning capabilities, which enables “the system to change while in use.”6 However, note the use of the word ‘may’ in the definition of AI system when describing adaptiveness, meaning that this factor is not decisive. Not all systems need to have self-learning capabilities.7
Inferring from inputs
In the context of machine learning, ‘inference’ refers to using the trained model to generate new outputs. This is what you do every time you prompt ChatGPT, Claude or any other chatbot.
Accordingly, under the AI Act, inference means “the process of obtaining the outputs...and to a capability of AI systems to derive models or algorithms, or both, from inputs or data.” The Act considers this to be a key “characteristic of AI systems.”8
In essence, inference refers to the mechanisms used by the system to turn inputs into outputs. Those mechanisms, which are developed during model training and testing, can include supervised learning, unsupervised learning, reinforcement learning and deep learning.9
Influencing environments
AI systems are not passive - they impact the environments that they operate in. This includes physical environments (a robot arm picking up an object) or virtual environments (executing data flows or digital operations like web browsing).
General purpose AI
The AI Act also recognises what it calls ‘general-purpose’ AI (GPAI). The provisions on GPAI were added to the draft of the Act after the release of ChatGPT in 2022, in order to bring the models that power these generative AI systems within the Act’s scope.
The Act makes a distinction between GPAI models and GPAI systems:
A GPAI model is an AI model trained with a large amount of data, displays significant generality, can perform a wide range of distinct tasks and can be integrated into a variety of downstream systems.10
A GPAI system is an AI system based on a GPAI model that can be used for a variety of purposes and can be used directly or integrated with other AI systems.11
This can be more easily explained with an example: GPT-5 is a GPAI model whereas ChatGPT is a GPAI system equipped with GPT-5 (and other models) packaged together with a chatbot UI, content moderation pipelines and other tools that make up the product.
Risk levels
Under the AI Act, AI systems fall into one of several risk levels. These risk levels impose different kinds of obligations on providers and deployers of these systems.
I will not go into too much detail in this post about these different risk levels, but the categories include the following:
Prohibited AI practices, such as systems that negatively influence the behaviour of individuals in a manner that rids them of control over their choices (e.g., dark patterns)
High-risk AI systems, such as systems used for assessing a person’s creditworthiness
Limited-risk AI systems, which include systems that users can interact with directly, systems capable of generating synthetic content and systems used for emotion recognition and biometric categorisation
Low-risk AI systems, which include any system that does not fall within any of the above risk categories.
High-risk AI systems are subject to the most onerous obligations under the Act, and it is therefore the type of system that the Act is most concerned with.
So does agentic AI fall within the scope of the AI Act?
On its face, it would seem that the answer is quite simple: AI agents fall within the scope of the AI Act.
AI agents certainly possess a high level of autonomy, are capable of adapting while in use, can make several inferences from their inputs (in the form of multi-step reasoning) and can influence their environment through tool use and access to other resources. Agentic AI would seem to meet the definition of ‘AI system’ provided under the AI Act.
But apparently not so, maybe. Back in September, Sergey Lagodinsky, a member of the European Parliament, wrote a letter to the European Commission regarding AI agents and the AI Act. Lagodinsky’s letter highlighted three main issues with AI agents:
Their ability to operate autonomously without human oversight
Their ability to interact with external systems
Their potential to be misused by bad actors
The third issue highlighted by Lagodinsky could apply to any form of AI system. Though perhaps the broader point he is making here is that the potential damage that could be done with AI agents by bad actors is much higher given what they could be capable of. But this points to a broader issue with AI regulation and the decentralised nature of AI development - lots of different people can get access to AI models and systems, and some of those people will use AI for malicious purposes. How do you manage this activity so that risks are minimised and benefits are maximised? Not easy.
Apart from this, Lagodinsky raises some interesting points about autonomy and interacting with external systems.
On autonomy, the definition of ‘AI system’ in the AI Act excludes full manual human involvement. But if autonomy exists on a spectrum, then AI agents would sit at the far end, where human involvement is minimal. If the Act anticipates AI systems will have at least “some” autonomy, then surely it would also include systems with “a lot of” autonomy. How could higher levels of autonomy exercised by AI agents take them outside of the Act’s remit?
But this point on autonomy connects with the second point made by Lagodinsky, which is the fact that AI agents can interact with other systems autonomously. It could be argued that the definition of AI system assumes a human-in-the-loop at all points of the system’s operation. This means that simple, single-turn input-and-output prompting of AI systems is certainly in scope. However, are AI agents still in scope if they are capable of executing multi-step tasks with no human verification, where those steps may involve the use of tools and resources that impact the agent’s wider environment?
To give an example, I have been trying to use Claude Skills to carry out vendor reviews based on the requirements under the GDPR. For one of the test runs, Claude kept running into an issue with the JavaScript code it was producing, seemingly going around in circles trying to fix it.
When Claude was attempting to correct itself, I was not telling it what to do. I just let it do its thing. But in doing so, is the system still an AI system in the manner envisaged by the AI Act, or is this system operating with so much autonomy, using tools to influence its environment, that it goes beyond what the EU anticipated? If the definition of an ‘AI system’ really does only cover single-turn input-and-output prompting of AI systems, then presumably AI agents do not fall within its scope.
And this brings us to the European Commission’s response to Lagodinsky’s letter, which it provided in November. The Commission’s letter made the following points:
AI agents are not a distinct category in the Act
The Act’s rules extend to agentic AI systems and general-purpose AI models they are built upon
The risk-based framework applies “to the extent that AI agents are AI systems,” a phrasing that some could interpret as suggesting the Commission is open to the possibility that some agents might fall outside the scope of the Act’s primary risk-based framework
The Commission’s third point here is what could spark some doubt about agentic AI being regulated by the AI Act. The phrase “to the extent that AI agents are AI systems” would suggest that there are forms of agentic AI that are not AI systems. There could be a few ways to interpret this.
Firstly, it could be the case that the Commission is simply highlighting the possibility that some uses of AI agents could be classed as “low-risk”. In other words, there could be use cases that are neither prohibited, high-risk nor limited-risk, and therefore the obligations under the Act do not apply.
Alternatively, one could interpret the Commission’s framing here as suggesting that due to the inherent nature of AI agents, such forms of AI may actually fall outside the scope of the AI Act. This might be because of the high levels of autonomy that such systems possess, or for some other reason. Whatever the rationale, the Commission did not elaborate.
There were some further points made by the Commission in its letter to Lagodinsky:
The most relevant provisions, according to the Commission, include the bans on harmful manipulation and exploitation of vulnerabilities. This is in addition to the requirement for chatbots and similar systems to make clear to humans that they are interacting with a machine.
The level of autonomy or tool use of a GPAI model could “be decisive in its designation as having systemic risk”.
The list of high-risk use cases can be updated by the Commission and it will therefore “closely monitor the development of AI agents and consider further action as needed”.
The Commission recently issued a technical assistance contract which includes a section dedicated to assessing the safety and security of AI agents.
What does this all mean?
Drafting legislation is not easy.
On the one hand, the rules cannot be so closely tied to the state of the art that they become redundant as soon as that state of the art changes. On the other hand, the rules cannot be so abstract that they are too difficult to apply in practice.
Ideally, for AI regulation, you want rules that would cover the different evolutions that AI could take. The AI Act does this to an extent, mainly by taking a risk-based approach.
But the fact that the EU’s flagship AI regulation may not even cover the latest iteration of this technology could be a huge problem. It is yet another issue with the AI Act that the Commission will need to address, and further calls into question what the true effectiveness of this legislation will be if it ever fully comes into force.
Chip Huyen, AI Engineering: Building Applications with Foundation Models (O’Reilly Media 2024), pp.276-277.
EU AI Act, Article 1.1.
EU AI Act, Article 3.1.
EU AI Act, Recital (12).
European Commission Guidelines, 6 February 2025, p.3.
EU AI Act, Recital (12).
European Commission Guidelines, 6 February 2025, p.4.
EU AI Act, Recital (12).
European Commission Guidelines, 6 February 2025, pp.6-8.
EU AI Act, Article 3.63.
EU AI Act, Article 3.66.