In our previous analyses, we examined the obligations of deployers and the circumstances under which an organization may become an involuntary provider of AI agents under the AI Act. Those two articles addressed the question of who you are in the AI Act value chain. This article addresses a different but equally critical question: what rules govern the data that AI agents process.
AI agents do not merely generate text. They access emails, read documents, query databases, interact with external services, make decisions, and execute actions — often in multi-step chains with limited human intervention. Every one of these operations that involves personal data constitutes processing within the meaning of Article 4(2) of Regulation (EU) 2016/679 (GDPR).
The regulatory framework that applies to this processing is not solely the AI Act. The GDPR remains fully applicable, and its requirements — legal basis, transparency, data subject rights, restrictions on automated decision-making — do not yield to the AI Act but operate alongside it. For organizations deploying AI agents, compliance requires an integrated analysis of both frameworks.
Three scenarios to frame the analysis
Before examining the legal provisions in detail, it is useful to consider three concrete scenarios that illustrate how AI agents interact with personal data in practice. These scenarios will serve as reference points throughout the analysis.
Scenario A — AI agent in customer service
A company deploys an AI agent as the first point of contact for customer support. The agent accesses the company’s CRM, retrieves the customer’s order history, account details, and previous interactions, and proposes solutions — which may include issuing refunds, modifying orders, or scheduling callbacks. In some configurations, the agent executes these actions autonomously.
The personal data processed includes: name, email, telephone number, purchase history, payment references, and the content of the customer’s communications. If the customer describes a health condition, disability, or financial hardship, the agent may also encounter special categories of data.
Scenario B — AI agent in recruitment
A company integrates an AI agent into its recruitment workflow. The agent receives applications, analyses CVs, scores candidates against predefined criteria, and produces a ranked shortlist. Human recruiters review the shortlist, but the initial screening — which determines who is excluded — is performed entirely by the agent.
This scenario falls directly within the scope of Annex III of the AI Act: AI systems intended for the recruitment or selection of natural persons are classified as high-risk. At the same time, automated screening produces results that are significant for the candidates — their applications are accepted or rejected based on the agent’s assessment.
Scenario C — AI agent in professional services
A professional — a lawyer, consultant, or accountant — uses an AI agent to manage appointments, analyze client correspondence, and prepare preliminary responses. The agent accesses the professional’s email inbox and calendar, reads incoming messages, extracts relevant information, and drafts replies.
The critical point is that the personal data processed belongs to third parties — clients, counterparties, opposing counsel — who have no relationship whatsoever with the provider of the AI agent and have not been informed that an AI system is processing their data.
Who is the data controller?
The first operational question is: when an AI agent processes personal data, who bears the responsibilities of the data controller under Article 4(7) GDPR?
The answer depends on who determines the purposes and means of the processing. In most deployment scenarios, the organization that uses the AI agent — the entity that decides why and how the agent operates — is the data controller. The provider of the agent typically acts as a data processor within the meaning of Article 28, processing data on behalf of and under the instructions of the deploying organization.
This qualification is relatively straightforward in Scenario A: the company decides to deploy the agent in customer service, defines its scope of action, determines which data it can access, and what actions it can take. The company is the controller; the agent provider is the processor.
The analysis becomes more complex when the agent exercises significant autonomy. If an agent autonomously determines which data to access, which sources to query, or which actions to take — beyond the instructions foreseen by the deploying organization — the question arises whether the agent provider, or even the agent itself as a system, is effectively co-determining the purposes and means of the processing. In such cases, the deploying organization must ensure that its contractual and technical arrangements with the provider clearly define the boundaries of the processing — and that those boundaries are enforceable.
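One way such boundaries can be made technically enforceable is a deny-by-default permission check on the agent's tool calls. The sketch below is purely illustrative — `TOOL_SCOPES`, `ToolCallDenied`, and `check_tool_call` are hypothetical names, not part of any real agent framework:

```python
# Minimal sketch of a deny-by-default permission layer for an AI agent's
# tool calls. All names here (TOOL_SCOPES, ToolCallDenied, check_tool_call)
# are illustrative assumptions, not a prescribed implementation.

TOOL_SCOPES = {
    # tool name -> data categories the deployer has authorized it to touch
    "crm_lookup": {"contact_data", "order_history"},
    "issue_refund": {"order_history", "payment_reference"},
}

class ToolCallDenied(Exception):
    """Raised when a tool call exceeds the contractually defined boundaries."""

def check_tool_call(tool: str, requested_categories: set[str]) -> None:
    allowed = TOOL_SCOPES.get(tool)
    if allowed is None:
        raise ToolCallDenied(f"tool '{tool}' is not authorized at all")
    excess = requested_categories - allowed
    if excess:
        raise ToolCallDenied(f"tool '{tool}' may not access: {sorted(excess)}")

# A call inside the defined boundaries passes silently; anything outside
# them is blocked and can be logged as evidence of the processor's limits.
check_tool_call("crm_lookup", {"order_history"})
```

A guard of this kind makes the Article 28 instructions operative in code rather than only on paper: the agent cannot factually expand the purposes and means beyond what the controller defined.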
The EDPB Guidelines 07/2020 on the concepts of controller and processor remain the reference framework for this analysis. They emphasize that the determination of roles must be based on the factual circumstances of the processing, not merely on the contractual designation.
In Scenario C, an additional layer of complexity arises: the professional is the controller of the data of their clients, but the introduction of an AI agent that accesses and processes those data may require an update to the data processing agreements, the privacy notices provided to clients, and — depending on the nature of the professional activity — compliance with sector-specific confidentiality obligations.
Which legal basis for the processing?
Article 6(1) GDPR requires that every processing operation be grounded in one of six legal bases. For AI agents, three are most relevant.
Consent — Art. 6(1)(a)
Consent must be freely given, specific, informed, and unambiguous (Art. 4(11)). In the context of AI agents, consent is problematic for several reasons.
An AI agent operating in a multi-step workflow may process data for purposes that emerge during the execution chain — purposes that were not foreseeable at the time consent was requested. The specificity requirement is difficult to satisfy when the agent’s actions are partially autonomous. Moreover, in Scenario B (recruitment), the candidate’s consent to the processing of their CV cannot be considered freely given if refusal would effectively prevent their application from being considered (Recital 43; EDPB Guidelines 05/2020).
Consent remains appropriate in limited, well-defined contexts — for example, when a customer explicitly agrees to interact with an AI agent for a specific purpose. But it is rarely a viable general-purpose legal basis for processing carried out through AI agents.
Performance of a contract — Art. 6(1)(b)
This basis applies when processing is necessary for the performance of a contract to which the data subject is a party. In Scenario A, a customer service agent that retrieves order information and processes a return is performing the contract between the company and the customer. The processing is necessary for that purpose.
However, the necessity requirement must be interpreted strictly (EDPB Guidelines 2/2019). Not every operation performed by the agent is necessary for the contract. If the agent uses customer data to train a model to improve its own performance or to feed an analytics pipeline, those are separate purposes that require separate legal bases.
Legitimate interest — Art. 6(1)(f)
In many business-to-business or organizational deployments, legitimate interest may constitute a viable legal basis for AI agent processing. It requires a three-part test: (i) the controller pursues a legitimate interest; (ii) the processing is necessary to achieve that interest; and (iii) the interests or fundamental rights and freedoms of the data subjects do not override the controller's interest — an assessment that must take into account the reasonable expectations of the data subjects involved.
In Scenario C, a professional who uses an AI agent to manage correspondence and prepare preliminary analyses has a legitimate interest in the efficient conduct of their professional activity. The processing is necessary for that purpose (manual processing of every email would be disproportionate). The key question is the third element: do the rights of the third parties whose data are processed — clients, counterparties — override that interest? The answer depends on the nature of the data, the reasonable expectations of the data subjects, and the safeguards in place (such as encryption, access controls, data minimization, and retention limits).
Automated decision-making and Article 22
Article 22(1) GDPR provides that a data subject has the right not to be subject to a decision based solely on automated processing — including profiling — which produces legal effects concerning them or similarly significantly affects them.
Scenario B is the clearest case: an AI agent that screens CVs and produces a ranked shortlist may fall within the scope of Article 22 when the exclusion or selection of candidates depends in practice on an automated output that is not subject to effective and substantive human review — an outcome that significantly affects the candidates.
The conditions for the application of Article 22 are:
- The decision is based solely on automated processing: if a human recruiter meaningfully reviews and can override every individual decision, Art. 22 does not apply. But a human who merely rubber-stamps the agent’s output does not constitute meaningful human oversight.
- The decision produces legal effects or similarly significantly affects the data subject: denial of a job application, refusal of credit, rejection of an insurance claim.
- The exceptions under Art. 22(2) apply only if the decision is necessary for a contract, authorized by law, or based on explicit consent. In each case, suitable safeguards (Art. 22(3)) must be in place, including the right to obtain human intervention, to express a point of view, and to contest the decision.
In Scenario A, the analysis depends on the scope of the agent’s autonomy. If the agent merely proposes solutions that a human operator confirms, Art. 22 is not engaged. If the agent autonomously issues a refund, cancels an order, or denies a claim without meaningful human review, the provision applies.
Transparency: two overlapping regimes
When an AI agent interacts with natural persons, the deploying organization must simultaneously satisfy two distinct sets of transparency obligations.
| Obligation | What it requires | Legal ref. | Applicable from |
|---|---|---|---|
| AI interaction disclosure | Inform the person that they are interacting with an AI system — before or at the start of the interaction | Art. 50(1) AI Act | 2 August 2026 |
| Privacy notice — data collected from the data subject | Provide identity of the controller, purposes, legal basis, recipients, retention period, data subject rights | Art. 13 GDPR | Already in force |
| Privacy notice — data obtained from other sources | Same as above, plus the source of the data and categories of data — within a reasonable period, max one month | Art. 14 GDPR | Already in force |
| Information on automated decision-making | Inform the data subject of the existence of automated decision-making (incl. profiling), meaningful information about the logic involved, significance, and envisaged consequences | Art. 13(2)(f), Art. 14(2)(g) GDPR | Already in force |
| Deepfake and AI content labelling | Label AI-generated content and deepfakes, especially on matters of public interest | Art. 50(4) AI Act | 2 August 2026 |
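The first row of the table — the Art. 50(1) disclosure — can, at its simplest, be a message delivered before any agent output, alongside a pointer to the Art. 13 notice. The wording and structure below are hypothetical, not a mandated form:

```python
# Hypothetical sketch: the Art. 50(1) AI Act disclosure and a pointer to
# the Art. 13 GDPR privacy notice are prepended before any agent output.
DISCLOSURE = "You are chatting with an AI assistant; a human is available on request."
PRIVACY_POINTER = "Details on how your data are processed are in our privacy notice."

def open_session(first_agent_reply: str) -> list[str]:
    # The disclosure comes before or at the start of the interaction (Art. 50(1)).
    return [DISCLOSURE, PRIVACY_POINTER, first_agent_reply]

messages = open_session("Hello! How can I help with your order?")
```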
In Scenario C, the transparency challenge is particularly acute. The third parties whose data are processed by the professional’s AI agent — clients, counterparties — may need to be informed under the GDPR that an AI system is processing their data. It may be necessary to update privacy notices and, for data obtained indirectly, to verify the applicability of the obligations under Art. 14 GDPR and any available exemptions under Art. 14(5), taking into account sector-specific confidentiality obligations.
Special categories of data and AI agents
Article 9(1) GDPR prohibits the processing of special categories of personal data — including health data, biometric data, data revealing racial or ethnic origin, political opinions, religious beliefs, trade union membership, and data concerning sex life or sexual orientation — unless one of the exceptions in Article 9(2) applies.
AI agents that access unstructured data sources — such as emails, documents, and chat logs — may inadvertently process special categories of data. In Scenario A, a customer who describes a medical condition in a support request is providing health data. In Scenario B, a CV that includes a photograph, mentions of disabilities, or references to religious or community activities may contain data protected under Art. 9.
The deploying organization must implement technical and organizational measures to minimise the risk of processing special categories of data through AI agents, and must have a lawful basis under Article 9(2) if such processing cannot be avoided.
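One such measure is a pre-screening step that flags text possibly containing Article 9 data before it reaches the agent, so it can be routed to a restricted workflow. The keyword patterns below are a deliberately naive illustration — a real deployment would need far more robust detection (and, where processing still occurs, an Article 9(2) basis):

```python
# Naive keyword screen that flags text possibly containing Art. 9 data
# before it reaches the agent. Purely illustrative: the categories and
# patterns are assumptions, not an adequate production-grade detector.
import re

SPECIAL_CATEGORY_HINTS = {
    "health": r"\b(diagnos\w+|disabilit\w+|medic\w+|illness)\b",
    "religion": r"\b(church|mosque|synagogue|religio\w+)\b",
    "trade_union": r"\b(trade union|union member\w*)\b",
}

def flag_special_categories(text: str) -> set[str]:
    lowered = text.lower()
    return {cat for cat, pat in SPECIAL_CATEGORY_HINTS.items()
            if re.search(pat, lowered)}

hits = flag_special_categories("I could not collect the parcel due to my disability.")
# -> {"health"}: route this message to a restricted workflow, not the default agent.
```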
Operational table: GDPR obligations for organizations deploying AI agents
| # | Obligation | What it entails | GDPR ref. | AI Act ref. |
|---|---|---|---|---|
| 1 | Identify and document the legal basis | Determine the applicable legal basis under Art. 6(1) for each category of processing performed by the AI agent | Art. 6, 5(2) | — |
| 2 | Qualify the roles in the processing chain | Determine who is controller and who is processor; formalize in a data processing agreement (Art. 28) | Art. 4(7), 28 | Art. 25, 26 |
| 3 | Conduct a DPIA | Assess the impact on the rights and freedoms of natural persons — mandatory for systematic and extensive profiling, large-scale processing of special categories, or automated decisions with legal effects | Art. 35 | Art. 27 (FRIA, where applicable — not equivalent to DPIA) |
| 4 | Provide transparency and information | Update privacy notices to reflect the use of AI agents; disclose the AI interaction before or at the start of the interaction | Art. 12, 13, 14 | Art. 50(1) |
| 5 | Ensure human oversight for automated decisions | Where the agent makes decisions with legal or similarly significant effects, ensure the right to human intervention, the right to express a point of view, and the right to contest the decision | Art. 22 | Art. 14, 26(2) |
| 6 | Implement data minimisation and access controls | Limit the agent's access to the personal data strictly necessary for its task; implement role-based access controls and logging | Art. 5(1)(c), 25, 32 | Art. 15 |
| 7 | Address special categories of data | Implement measures to prevent inadvertent processing of Art. 9 data; if unavoidable, ensure a lawful exception under Art. 9(2) | Art. 9 | — |
| 8 | Ensure data subject rights | Guarantee effective exercise of rights of access, rectification, erasure, portability, and objection — including for data processed by the agent | Art. 15–21 | — |
| 9 | Assess international data transfers | If the AI agent provider processes data outside the EU/EEA, ensure an adequate transfer mechanism (adequacy decision, SCCs, Art. 49 derogations) | Art. 44–49 | — |
| 10 | Document and maintain records | Include AI agent processing in the register of processing activities (Art. 30); for high-risk AI systems, retain logs for at least six months, if and to the extent that such logs are under the deployer's control | Art. 30 | Art. 26(6) |
This table is intended as a practical orientation tool and does not replace case-by-case legal analysis.
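As a minimal illustration of item 10, the Article 30 register entry for an agent deployment can be kept as structured data. The field names below loosely track Article 30(1)(a)–(g) but are an assumed schema, and the values are invented examples:

```python
# Illustrative Art. 30(1) record entry for an AI agent deployment.
# The schema and all values are hypothetical examples, not a mandated format.
register_entry = {
    "controller": "ExampleCo S.r.l.",
    "processing": "AI agent - first-line customer support",
    "purposes": ["contract performance", "customer support"],
    "data_categories": ["contact data", "order history", "communications"],
    "data_subjects": ["customers"],
    "recipients": ["agent provider (processor, Art. 28)"],
    "third_country_transfers": None,  # reassess if the provider hosts outside the EU/EEA
    "retention": "24 months after last interaction",
    "security_measures": ["RBAC", "encryption at rest", "tool-call logging"],
}
```

Keeping the register machine-readable makes it easier to reconcile with the deployer's log-retention duty under Art. 26(6) AI Act for high-risk systems.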
The two frameworks are not alternatives
The AI Act regulates the system. The GDPR regulates the data that the system processes. The two frameworks overlap but do not substitute for each other: an organization may be fully compliant with the AI Act — having classified its system, fulfilled transparency obligations, ensured human oversight — and still be in breach of the GDPR if it lacks a valid legal basis, fails to provide adequate information to data subjects, or does not respect the restrictions on automated decision-making.
For organizations deploying AI agents, the compliance exercise is necessarily integrated. The starting point is always the same: understand what data the agent accesses, why it accesses them, and what consequences they have for the individuals concerned. The answers to those questions determine the applicable obligations under both frameworks.
For the analytical framework on subject roles in the AI Act — provider, deployer, and the mechanisms of reclassification — see my paper on subject roles and the previous articles in this series on deployer obligations and the involuntary provider. For an interactive tool to map the intersections between the two frameworks, see the GDPR & AI Hub and the AI Act — GDPR Database.
Resources
Operational hubs
- AI Hub — Regulatory and operational guidance — the AI Act from a legal perspective: find your role, classify your system, see your obligations
- AI Act Hub — Operator roles and obligations — map of the six AI Act operators, role transformation mechanisms, obligation cascade
- GDPR & AI Hub — Operational intersections — legal bases for data processing in AI systems, automated decisions, DPIA, profiling
Tools and resources
- AI Act — GDPR Database — interactive database of correspondences between the AI Act and GDPR
- AI Compliance Tools — tools for AI compliance
- Self-Assessment Tool — self-assessment of AI Act obligations
- NicFab Newsletter — weekly update on privacy, data protection, AI, and cybersecurity (every Tuesday)
Further reading
- AI Act: deployers, AI agents and transparency — the deployer perspective and operational framework
- AI Agents: when a deployer becomes a provider — reclassification under Art. 25
- Video Conferencing and GDPR — a comparative analysis from the standpoint of data protection
- Digital Omnibus on AI: Parliament rewrites the Commission’s rules
Concluding note
Regulatory precision is not an academic luxury — it is a professional responsibility. AI agents are powerful tools, but every tool that processes personal data operates within a legal framework that exists to protect fundamental rights. The question is not whether the GDPR applies to AI agents — it does, fully and without exception — but whether organizations deploying them have done the work to ensure that it is applied correctly.
This post is part of the series dedicated to the implementation of the AI Act and its intersection with the GDPR. For real-time updates, follow us on the blog and subscribe to the NicFab Newsletter.
