In our previous analyses, we examined the obligations of deployers and the circumstances under which an organization may become an involuntary provider of AI agents under the AI Act. Those two articles addressed the question of who you are in the AI Act value chain. This article addresses a different but equally critical question: what rules govern the data that AI agents process.

AI agents do not merely generate text. They access emails, read documents, query databases, interact with external services, make decisions, and execute actions — often in multi-step chains with limited human intervention. Every one of these operations that involves personal data constitutes processing within the meaning of Article 4(2) of Regulation (EU) 2016/679 (GDPR).

The regulatory framework that applies to this processing is not solely the AI Act. The GDPR remains fully applicable, and its requirements — legal basis, transparency, data subject rights, restrictions on automated decision-making — do not yield to the AI Act but operate alongside it. For organizations deploying AI agents, compliance requires an integrated analysis of both frameworks.

Three scenarios to frame the analysis

Before examining the legal provisions in detail, it is useful to consider three concrete scenarios that illustrate how AI agents interact with personal data in practice. These scenarios will serve as reference points throughout the analysis.

Scenario A — AI agent in customer service

A company deploys an AI agent as the first point of contact for customer support. The agent accesses the company’s CRM, retrieves the customer’s order history, account details, and previous interactions, and proposes solutions — which may include issuing refunds, modifying orders, or scheduling callbacks. In some configurations, the agent executes these actions autonomously.

The personal data processed includes: name, email, telephone number, purchase history, payment references, and the content of the customer’s communications. If the customer describes a health condition, disability, or financial hardship, the agent may also encounter special categories of data.

Scenario B — AI agent in recruitment

A company integrates an AI agent into its recruitment workflow. The agent receives applications, analyses CVs, scores candidates against predefined criteria, and produces a ranked shortlist. Human recruiters review the shortlist, but the initial screening — which determines who is excluded — is performed entirely by the agent.

This scenario falls directly within the scope of Annex III of the AI Act: AI systems intended for the recruitment or selection of natural persons are classified as high-risk. At the same time, automated screening produces results that are significant for the candidates — their applications are accepted or rejected based on the agent’s assessment.

Scenario C — AI agent in professional services

A professional — a lawyer, consultant, or accountant — uses an AI agent to manage appointments, analyze client correspondence, and prepare preliminary responses. The agent accesses the professional’s email inbox and calendar, reads incoming messages, extracts relevant information, and drafts replies.

The critical point is that the personal data processed belongs to third parties — clients, counterparties, opposing counsel — who have no relationship whatsoever with the provider of the AI agent and have not been informed that an AI system is processing their data.

Who is the data controller?

The first operational question is: when an AI agent processes personal data, who bears the responsibilities of the data controller under Article 4(7) GDPR?

The answer depends on who determines the purposes and means of the processing. In most deployment scenarios, the organization that uses the AI agent — the entity that decides why and how the agent operates — is the data controller. The provider of the agent typically acts as a data processor within the meaning of Article 28, processing data on behalf of and under the instructions of the deploying organization.

This qualification is relatively straightforward in Scenario A: the company decides to deploy the agent in customer service, defines its scope of action, determines which data it can access, and what actions it can take. The company is the controller; the agent provider is the processor.

The analysis becomes more complex when the agent exercises significant autonomy. If an agent autonomously determines which data to access, which sources to query, or which actions to take — beyond the instructions foreseen by the deploying organization — the question arises whether the agent provider, or even the agent itself as a system, is effectively co-determining the purposes and means of the processing. In such cases, the deploying organization must ensure that its contractual and technical arrangements with the provider clearly define the boundaries of the processing — and that those boundaries are enforceable.

The EDPB Guidelines 07/2020 on the concepts of controller and processor remain the reference framework for this analysis. They emphasize that the determination of roles must be based on the factual circumstances of the processing, not merely on the contractual designation.

In Scenario C, an additional layer of complexity arises: the professional is the controller of the data of their clients, but the introduction of an AI agent that accesses and processes those data may require an update to the data processing agreements, the privacy notices provided to clients, and — depending on the nature of the professional activity — compliance with sector-specific confidentiality obligations.

The legal basis for processing

Article 6(1) GDPR requires that every processing operation be grounded in one of six legal bases. For AI agents, three are most relevant.

Consent — Art. 6(1)(a)

Consent must be freely given, specific, informed, and unambiguous (Art. 4(11)). In the context of AI agents, consent is problematic for several reasons.

An AI agent operating in a multi-step workflow may process data for purposes that emerge during the execution chain — purposes that were not foreseeable at the time consent was requested. The specificity requirement is difficult to satisfy when the agent’s actions are partially autonomous. Moreover, in Scenario B (recruitment), the candidate’s consent to the processing of their CV cannot be considered freely given if refusal would effectively prevent their application from being considered (Recital 43; EDPB Guidelines 05/2020).

Consent remains appropriate in limited, well-defined contexts — for example, when a customer explicitly agrees to interact with an AI agent for a specific purpose. But it is rarely a viable general-purpose legal basis for processing carried out through AI agents.

Performance of a contract — Art. 6(1)(b)

This basis applies when processing is necessary for the performance of a contract to which the data subject is a party. In Scenario A, a customer service agent that retrieves order information and processes a return is performing the contract between the company and the customer. The processing is necessary for that purpose.

However, the necessity requirement must be interpreted strictly (EDPB Guidelines 2/2019). Not every operation performed by the agent is necessary for the contract. If the agent uses customer data to train a model to improve its own performance or to feed an analytics pipeline, those are separate purposes that require separate legal bases.

Legitimate interest — Art. 6(1)(f)

In many business-to-business or organizational deployments, legitimate interest may constitute a relevant legal basis for AI agent processing. Article 6(1)(f) requires a three-part test: (i) the controller pursues a legitimate interest; (ii) the processing is necessary to achieve that interest; and (iii) the interests or fundamental rights and freedoms of the data subject do not override the controller’s interest. The assessment must also take into account the reasonable expectations of the data subjects involved.

In Scenario C, a professional who uses an AI agent to manage correspondence and prepare preliminary analyses has a legitimate interest in the efficient conduct of their professional activity. The processing is necessary for that purpose (manual processing of every email would be disproportionate). The key question is the third element: do the rights of the third parties whose data are processed — clients, counterparties — override that interest? The answer depends on the nature of the data, the reasonable expectations of the data subjects, and the safeguards in place (such as encryption, access controls, data minimization, and retention limits).

The Digital Omnibus proposal: Art. 88c. The European Commission has proposed a new Article 88c within the GDPR, which would explicitly confirm that the processing of personal data in the context of AI development and operation may be pursued based on legitimate interest under Article 6(1)(f), subject to specific safeguards: data minimisation during source selection and training, enhanced transparency, and an unconditional right for data subjects to object. The text is currently under negotiation in the Council (Antici Group on Simplification). It is not yet adopted law, and its final form may change significantly. However, its inclusion signals that, in the European institutional debate, legitimate interest is being considered as a potentially relevant legal basis for certain AI-related processing operations, without prejudice to the case-by-case assessment required under Art. 6(1)(f) GDPR.

Automated decision-making and Article 22

Article 22(1) GDPR provides that a data subject has the right not to be subject to a decision based solely on automated processing — including profiling — which produces legal effects concerning them or similarly significantly affects them.

AI agents may fall within the scope of this provision when their use results in decisions based solely on automated processing that produce legal effects or similarly significantly affect the data subject. Scenario B is the clearest case: an AI agent that screens CVs and produces a ranked shortlist may fall within the scope of Article 22 when the exclusion or selection of candidates depends in practice on an automated output that is not subject to effective and substantive human review. That outcome produces significant effects for the candidates.

The conditions for the application of Article 22 are:

  • The decision is based solely on automated processing: if a human recruiter meaningfully reviews and can override every individual decision, Art. 22 does not apply. But a human who merely rubber-stamps the agent’s output does not constitute meaningful human oversight.
  • The decision produces legal effects or similarly significantly affects the data subject: denial of a job application, refusal of credit, rejection of an insurance claim.
  • The exceptions under Art. 22(2) apply only if the decision is necessary for a contract, authorized by law, or based on explicit consent. In each case, suitable safeguards (Art. 22(3)) must be in place, including the right to obtain human intervention, to express a point of view, and to contest the decision.

In Scenario A, the analysis depends on the scope of the agent’s autonomy. If the agent merely proposes solutions that a human operator confirms, Art. 22 is not engaged. If the agent autonomously issues a refund, cancels an order, or denies a claim without meaningful human review, the provision applies.

Art. 22 and the Digital Omnibus. The European Commission had proposed amendments to Art. 22 to clarify the conditions for automated decision-making. The Presidency's compromise text has removed those amendments, maintaining the current regime. In the Council negotiations, several delegations have argued that the issue requires further clarification rather than deletion. At the same time, the EDPB and EDPS have stressed that Art. 22 should remain a prohibition with exceptions. The current text of Art. 22 GDPR remains unchanged — for now.

Transparency: two overlapping regimes

When an AI agent interacts with natural persons, the deploying organization must simultaneously satisfy two distinct sets of transparency obligations.

| Obligation | What it requires | Legal ref. | Applicable from |
|---|---|---|---|
| AI interaction disclosure | Inform the person that they are interacting with an AI system — before or at the start of the interaction | Art. 50(1) AI Act | 2 August 2026 |
| Privacy notice — data collected from the data subject | Identity of the controller, purposes, legal basis, recipients, retention period, data subject rights | Art. 13 GDPR | Already in force |
| Privacy notice — data obtained from other sources | Same as above, plus the source of the data and categories of data — within a reasonable period, max one month | Art. 14 GDPR | Already in force |
| Information on automated decision-making | Existence of automated decision-making (incl. profiling), meaningful information about the logic involved, its significance, and the envisaged consequences | Art. 13(2)(f), Art. 14(2)(g) GDPR | Already in force |
| Deepfake and AI content labelling | Label AI-generated content and deepfakes, especially on matters of public interest | Art. 50(4) AI Act | 2 August 2026 |

In Scenario C, the transparency challenge is particularly acute. The third parties whose data are processed by the professional’s AI agent — clients, counterparties — may need to be informed under the GDPR that an AI system is processing their data. It may be necessary to update privacy notices and, for data obtained indirectly, to verify the applicability of the obligations under Art. 14 GDPR and any available exemptions under Art. 14(5), taking into account sector-specific confidentiality obligations.

Special categories of data and AI agents

Article 9(1) GDPR prohibits the processing of special categories of personal data — including health data, biometric data, data revealing racial or ethnic origin, political opinions, religious beliefs, trade union membership, and data concerning sex life or sexual orientation — unless one of the exceptions in Article 9(2) applies.

AI agents that access unstructured data sources — such as emails, documents, and chat logs — may inadvertently process special categories of data. In Scenario A, a customer who describes a medical condition in a support request is providing health data. In Scenario B, a CV that includes a photograph, mentions of disabilities, or references to religious or community activities may contain data protected under Art. 9.

The deploying organization must implement technical and organizational measures to minimise the risk of processing special categories of data through AI agents, and must have a lawful basis under Article 9(2) if such processing cannot be avoided.

The Digital Omnibus proposal: Art. 9(2)(k) and 9(5). The Commission proposes a new exception under Art. 9(2)(k) for the residual presence of special categories of data in AI training, testing, and validation datasets, subject to the conditions of a new paragraph 5: the controller must implement appropriate technical and organizational measures to avoid processing such data, and must remove them when identified. If removal would require disproportionate effort, the controller must effectively protect the data from use in outputs or disclosure to third parties. In the Council negotiations, several Member States have requested clarification on the scope of the exception, the notion of disproportionate effort, and the coordination with the general regime under Article 9 of the GDPR. This provision is under negotiation and is not yet adopted as law.

Operational table: GDPR obligations for organizations deploying AI agents

| # | Obligation | What it entails | GDPR ref. | AI Act ref. |
|---|---|---|---|---|
| 1 | Identify and document the legal basis | Determine the applicable legal basis under Art. 6(1) for each category of processing performed by the AI agent | Art. 6, 5(2) | |
| 2 | Qualify the roles in the processing chain | Determine who is controller and who is processor; formalize in a data processing agreement (Art. 28) | Art. 4(7), 28 | Art. 25, 26 |
| 3 | Conduct a DPIA | Assess the impact on the rights and freedoms of natural persons — mandatory for systematic and extensive profiling, large-scale processing of special categories, or automated decisions with legal effects | Art. 35 | Art. 27 (FRIA, where applicable — not equivalent to DPIA) |
| 4 | Provide transparency and information | Update privacy notices to reflect the use of AI agents; disclose the AI interaction before or at the start of the interaction | Art. 12, 13, 14 | Art. 50(1) |
| 5 | Ensure human oversight for automated decisions | Where the agent makes decisions with legal or similarly significant effects, ensure the right to human intervention, to express a point of view, and to contest the decision | Art. 22 | Art. 14, 26(2) |
| 6 | Implement data minimisation and access controls | Limit the agent's access to the personal data strictly necessary for its task; implement role-based access controls and logging | Art. 5(1)(c), 25, 32 | Art. 15 |
| 7 | Address special categories of data | Implement measures to prevent inadvertent processing of Art. 9 data; if unavoidable, ensure a lawful exception under Art. 9(2) | Art. 9 | |
| 8 | Ensure data subject rights | Guarantee effective exercise of rights of access, rectification, erasure, portability, and objection — including for data processed by the agent | Art. 15–21 | |
| 9 | Assess international data transfers | If the AI agent provider processes data outside the EU/EEA, ensure an adequate transfer mechanism (adequacy decision, SCCs, Art. 49 derogations) | Art. 44–49 | |
| 10 | Document and maintain records | Include AI agent processing in the register of processing activities (Art. 30); for high-risk AI systems, retain logs for at least six months, if and to the extent that such logs are under the deployer's control | Art. 30 | Art. 26(6) |

This table is intended as a practical orientation tool and does not replace case-by-case legal analysis.

The two frameworks are not alternatives

The AI Act regulates the system. The GDPR regulates the data that the system processes. The two frameworks overlap but do not substitute for each other: an organisation may be fully compliant with the AI Act — having classified its system, fulfilled transparency obligations, ensured human oversight — and still be in breach of the GDPR if it lacks a valid legal basis, fails to provide adequate information to data subjects, or does not respect the restrictions on automated decision-making.

For organizations deploying AI agents, the compliance exercise is necessarily integrated. The starting point is always the same: understand what data the agent accesses, why it accesses them, and what consequences they have for the individuals concerned. The answers to those questions determine the applicable obligations under both frameworks.

For the analytical framework on subject roles in the AI Act — provider, deployer, and the mechanisms of reclassification — see my paper on subject roles and the previous articles in this series on deployer obligations and the involuntary provider. For an interactive tool to map the intersections between the two frameworks, see the GDPR & AI Hub and the AI Act — GDPR Database.

Concluding note

Regulatory precision is not an academic luxury — it is a professional responsibility. AI agents are powerful tools, but every tool that processes personal data operates within a legal framework that exists to protect fundamental rights. The question is not whether the GDPR applies to AI agents — it does, fully and without exception — but whether organizations deploying them have done the work to ensure that it is applied correctly.


This post is part of the series dedicated to the implementation of the AI Act and its intersection with the GDPR. For real-time updates, follow us on the blog and subscribe to the NicFab Newsletter.