In a previous article, we provided an operational overview of deployer obligations under the AI Act, the Commission’s position on AI agents and the official timeline for transparency compliance. That post included a summary of the obligations under Article 26 and a six-step operational framework.

This article takes the next step: it translates each paragraph of Article 26 of Regulation (EU) 2024/1689 into a structured checklist — obligation by obligation, action by action — identifying the responsible function within the organisation, the documentation to be produced and the coordination points with related obligations under the GDPR. The checklist also covers the closely connected obligations under Article 27 (Fundamental Rights Impact Assessment) and Article 49(3) (registration in the EU database), which are operationally inseparable from the deployer’s compliance framework.

The objective is to provide deployers of high-risk AI systems with a practical tool, ahead of the application date of 2 August 2026 for Annex III systems.

Scope: who needs this checklist

This checklist applies exclusively to deployers — as defined by Article 3(4) of the AI Act — who use AI systems classified as high-risk under Annex III of the Regulation. It does not apply to users of minimal-risk systems, nor to transparency obligations under Article 50 (which we address separately). For an overview of the risk classification and its practical implications, see our operational overview.

The deployer checklist: Art. 26 AI Act and connected obligations

1. Use in accordance with instructions (Art. 26(1))

Obligation. The deployer must take appropriate technical and organisational measures to ensure that it uses the high-risk AI system in accordance with the instructions for use accompanying the system, as provided by the provider pursuant to Article 13.

Concrete actions:

  • Obtain, archive and distribute internally the instructions for use provided by the provider.
  • Verify that actual use aligns with the intended purpose declared by the provider. Any use outside the intended purpose may trigger role transformation under Article 25.
  • Define technical and organisational measures that make compliance of actual use with the instructions verifiable — for example, access controls, usage protocols and periodic audits of how the system is being used.
  • Establish a documented process for reviewing instructions whenever the provider issues updates.

Responsible function: Compliance officer or the operational unit managing the AI system.

Documentation: Archived copy of instructions for use; internal use policy aligned with intended purpose; description of technical and organisational measures adopted; record of any updates received.

2. Human oversight (Art. 26(2))

Obligation. The deployer must assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.

Concrete actions:

  • Formally designate the person(s) responsible for human oversight by name, role and organisational position.
  • Verify that the designated persons have received adequate training on the specific system (not generic AI training).
  • Ensure that oversight personnel have the authority to override, suspend or discontinue the system’s operation.
  • Document reporting lines: the oversight person must have direct access to decision-makers who can act on their assessments.

Responsible function: Senior management (for designation); HR or training unit (for competence verification).

Documentation: Formal appointment letter or internal act; training records; description of authority and escalation procedures.

Coordination. This obligation intersects with the AI literacy requirement under Article 4, which has been applicable since 2 February 2025. AI literacy is the baseline requirement; human oversight competence under Article 26(2) is a higher, system-specific standard.

3. Control over input data (Art. 26(4))

Obligation. To the extent the deployer exercises control over input data, it must ensure that such data is relevant and sufficiently representative in view of the intended purpose of the system.

Concrete actions:

  • Identify whether the deployer has any control over the input data (as opposed to the data being determined entirely by the system architecture).
  • Where control exists, define data quality criteria aligned with the intended purpose.
  • Implement checks or sampling procedures to verify data relevance and representativeness on an ongoing basis.

Responsible function: Data management or IT unit, in coordination with the operational unit using the system.

Documentation: Data quality policy for the specific system; records of periodic checks; documentation of any data corrections.

Coordination. Where the input data includes personal data, the data quality obligation operates alongside the accuracy principle under Article 5(1)(d) GDPR.

4. Monitoring, risk reporting and suspension of use (Art. 26(5))

Obligation. The deployer must monitor the operation of the high-risk AI system on the basis of the instructions for use. Where the deployer has reason to consider that use in accordance with the instructions may result in the AI system presenting a risk within the meaning of Article 79(1), it must, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and suspend the use of the system. Where the deployer identifies any serious incident, it must immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authority.

Concrete actions:

  • Establish a monitoring protocol consistent with the instructions for use.
  • Define thresholds and criteria for escalation: what constitutes a risk within the meaning of Article 79(1), what constitutes a serious incident.
  • Identify in advance the contact points: provider, importer or distributor (where applicable), national market surveillance authority (in Italy, within the national framework, ACN is identified as market surveillance authority, without prejudice to the possible relevance of sector-specific authorities in specific contexts).
  • Implement a reporting procedure with defined timelines for internal escalation and external notification.
  • Include in the procedure an immediate suspension mechanism: if a risk within the meaning of Article 79(1) is identified, the system must be taken out of operation pending resolution.

Responsible function: Operational unit managing the system (first-line monitoring); compliance officer (escalation, reporting and suspension decision).

Documentation: Monitoring protocol; incident reporting and suspension procedure; record of incidents, notifications and suspensions; contact register.
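The escalation logic described above can be sketched in code. This is a purely illustrative model, not an implementation prescribed by the Regulation: the names (`EventType`, `EscalationPlan`, `escalation_plan`) are hypothetical, and the thresholds for classifying an event as an Article 79(1) risk or a serious incident must come from the deployer's own monitoring protocol.

```python
from dataclasses import dataclass
from enum import Enum, auto

class EventType(Enum):
    ARTICLE_79_RISK = auto()   # risk within the meaning of Art. 79(1)
    SERIOUS_INCIDENT = auto()  # serious incident under Art. 26(5)

@dataclass
class EscalationPlan:
    notify: list            # ordered list of external contact points
    suspend_system: bool    # whether use of the system must be suspended

def escalation_plan(event, has_distributor=False):
    """Map an event classification to the Art. 26(5) notification sequence.

    Illustrative only: classifying the event in the first place is the
    hard part, and belongs in the monitoring protocol.
    """
    if event is EventType.ARTICLE_79_RISK:
        # Inform the provider or distributor and the market surveillance
        # authority without undue delay, and suspend use of the system.
        first = "distributor" if has_distributor else "provider"
        return EscalationPlan(
            notify=[first, "market surveillance authority"],
            suspend_system=True,
        )
    # Serious incident: inform first the provider, then the importer or
    # distributor and the relevant market surveillance authority.
    return EscalationPlan(
        notify=["provider", "importer/distributor",
                "market surveillance authority"],
        suspend_system=False,
    )
```

The design point the sketch makes explicit is that the two branches differ in both sequence and consequence: an Article 79(1) risk triggers suspension, a serious incident triggers an ordered notification chain starting with the provider.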

5. Log retention (Art. 26(6))

Obligation. The deployer must keep the logs automatically generated by the high-risk AI system, to the extent such logs are under its control, for a period appropriate to the intended purpose of the system, of at least six months, unless provided otherwise in applicable Union or national law.

Concrete actions:

  • Verify whether the system generates automatic logs and whether such logs are accessible to the deployer.
  • Establish a log retention policy: minimum six months, or longer if required by sector-specific legislation.
  • Ensure that logs are stored securely and are retrievable for inspection by competent authorities.
  • Coordinate log retention with GDPR data retention limits where logs contain personal data.

Responsible function: IT or data management unit.

Documentation: Log retention policy; technical description of log storage and access; records demonstrating compliance with retention period.

Coordination. This is a critical intersection with GDPR. Logs may contain personal data (including inferences or decisions about individuals). The retention period under Article 26(6) must be reconciled with the storage limitation principle under Article 5(1)(e) GDPR. In my view, the AI Act obligation provides a legal basis for the minimum retention period, but the deployer must still ensure that retention beyond what is necessary is justified.
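The retention rule above reduces to a simple calculation: a floor of six months, extended where sector-specific legislation requires more, and capped in practice by the GDPR storage limitation principle. A minimal sketch, under the assumption that six months is approximated as 183 days (the Regulation does not fix a day count) and with hypothetical names:

```python
from datetime import date, timedelta
from typing import Optional

# Minimum retention under Art. 26(6): six months, approximated here
# as 183 days purely for illustration.
MIN_RETENTION = timedelta(days=183)

def earliest_deletion_date(log_created: date,
                           sector_retention: Optional[timedelta] = None) -> date:
    """Earliest date on which a log entry may be deleted.

    sector_retention: a longer period required by sector-specific law,
    if any. Note the GDPR side is not modelled: under the storage
    limitation principle, keeping logs beyond what is necessary still
    requires a documented justification.
    """
    retention = MIN_RETENTION
    if sector_retention is not None and sector_retention > retention:
        retention = sector_retention
    return log_created + retention
```

For example, a log created on the application date itself could not be deleted before roughly six months later; with a hypothetical one-year sectoral requirement, the longer period prevails.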

6. Worker information (Art. 26(7))

Obligation. Before putting into service or using a high-risk AI system in the workplace, deployers who are employers must inform workers’ representatives and the affected workers that they will be subject to the use of the system.

Concrete actions:

  • Identify all workplace contexts where the high-risk system will affect workers (directly or through decisions informed by the system).
  • Prepare a clear, accessible information notice for workers and their representatives.
  • Deliver the information before the system is put into service — not after.
  • Document the delivery and, where applicable, any consultation with workers’ representatives.

Responsible function: HR, in coordination with legal and the operational unit.

Documentation: Information notice; proof of delivery (date, recipients); record of any consultation with workers’ representatives.

Coordination. Where applicable, this obligation operates alongside national labour law requirements on the introduction of monitoring technologies in the workplace. In Italy, the provisions of Article 4 of Law No. 300/1970 (Statuto dei Lavoratori) on remote monitoring remain applicable.

7. Registration and prohibition of use for public deployers (Art. 26(8))

Obligation. Deployers that are public authorities, or Union institutions, bodies, offices or agencies, must use the high-risk AI system registered in the EU database under Article 71, unless the system is exempt from registration. If the system is not registered, such deployers must not use it and must inform the provider or distributor accordingly.

Concrete actions:

  • Determine whether the deployer falls within the scope of Article 26(8) (public authority, or Union institution, body, office or agency).
  • Verify that the high-risk AI system is duly registered in the EU database before deployment.
  • If the system is not registered: refrain from using the system and formally notify the provider or distributor.
  • Establish an internal verification step — a gate control before any new high-risk system is put into service by a public deployer.

Responsible function: Compliance officer; procurement or IT unit (for pre-deployment verification).

Documentation: Record of registration verification; where applicable, formal communication to provider/distributor regarding non-registered systems.

8. Use of provider information for the DPIA (Art. 26(9))

Obligation. Where the deployer is required to carry out a data protection impact assessment (DPIA) under Article 35 GDPR or Article 27 of Directive (EU) 2016/680 (law enforcement data protection), it must use the information provided by the provider under Article 13 of the AI Act to comply with that obligation.

Concrete actions:

  • Determine whether a DPIA is required (high-risk AI systems processing personal data will almost always trigger this threshold).
  • Obtain and use the technical documentation and instructions for use provided by the provider as input for the DPIA.
  • Integrate AI-specific risks (bias, opacity, automated decision-making) into the DPIA methodology.
  • If the provider’s information is insufficient to complete a meaningful DPIA, document this gap and request additional information from the provider.

Responsible function: DPO (for DPIA methodology and coordination); operational unit (for engaging with the provider).

Documentation: DPIA report; record of provider information used; where applicable, documentation of information gaps and requests to the provider.

Coordination. The DPIA under Article 35 GDPR and the Fundamental Rights Impact Assessment (FRIA) under Article 27 of the AI Act are distinct obligations with different scopes and different triggering conditions. The DPIA focuses on data protection risks; the FRIA covers a broader range of fundamental rights. Where both are required, the FRIA complements the DPIA — it does not replace it. We will analyse the relationship between these two instruments in detail in a forthcoming article.

9. Fundamental Rights Impact Assessment (Art. 27)

Obligation. Before putting a high-risk AI system referred to in Article 6(2) into use, deployers must carry out a Fundamental Rights Impact Assessment (FRIA) if they are: (a) bodies governed by public law; (b) private entities providing public services; or (c) deployers of systems referred to in Annex III, points 5(b) (creditworthiness assessment) and 5(c) (risk assessment and pricing in life and health insurance). An exception applies for systems referred to in Annex III, point 2 (critical infrastructure).

Concrete actions:

  • Determine whether the deployer falls within the scope of Article 27 — noting that the subjective scope is different from that of Art. 26(8) and Art. 49(3).
  • Identify the categories of persons and groups likely to be affected.
  • Assess the specific risks to fundamental rights in the specific context of use.
  • Identify measures for risk mitigation, including human oversight arrangements.
  • Notify the results of the assessment to the relevant market surveillance authority (Art. 27(3)).
  • Where certain risks are already covered by a DPIA under Article 35 GDPR, the FRIA complements that analysis; it does not replace or absorb it.

Responsible function: Legal unit or compliance officer, in coordination with the operational unit and, where applicable, the DPO.

Documentation: FRIA report; notification to market surveillance authority (Art. 27(3)); record of mitigation measures adopted.

10. Special rule for law enforcement deployers: post-remote biometric identification (Art. 26(10))

Obligation. Deployers of high-risk AI systems for post-remote biometric identification in the area of law enforcement are subject to specific additional requirements. They must seek authorisation — either prior to or within 48 hours of use — from a judicial authority or an independent administrative authority. If authorisation is refused, the deployer must immediately stop the use and delete the data and results linked to that use. Each use must be documented, and the deployer must submit annual reports to the relevant national market surveillance authority and the national data protection authority, covering the number and circumstances of uses, the type of system used and the authorisations obtained or refused.

Concrete actions:

  • Determine whether the high-risk system falls within the category of post-remote biometric identification used for law enforcement purposes.
  • Where applicable, establish a procedure for requesting authorisation from the competent judicial or independent administrative authority — either in advance or within 48 hours.
  • Implement an immediate stop-and-delete mechanism if authorisation is refused.
  • Document each individual use of the system with all relevant circumstances.
  • Prepare and submit annual reports to the market surveillance authority and the data protection authority.

Responsible function: Law enforcement unit using the system; legal unit (for authorisation requests); DPO (for annual reporting to the DPA).

Documentation: Record of each use; authorisation requests and decisions; deletion records where authorisation is refused; annual reports.

Coordination. This obligation operates alongside the requirements of Directive (EU) 2016/680 (Law Enforcement Directive) on the processing of personal data for law enforcement purposes.

11. Information to affected persons (Art. 26(11))

Obligation. Deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons must inform those persons that they are subject to the use of the high-risk AI system.

Concrete actions:

  • Identify all decisions (or decision-support processes) where the high-risk system is involved.
  • Prepare a clear, accessible notice to inform the affected persons.
  • Ensure the information is provided proactively — not only upon request.
  • Where applicable, coordinate with transparency obligations under Article 50.

Responsible function: Operational unit managing the system; legal unit (for content of the notice).

Documentation: Information notice; record of delivery method and timing.

Coordination. This obligation operates alongside the information obligations under Articles 13–14 GDPR (information to be provided to the data subject) and the right not to be subject to automated decision-making under Article 22 GDPR.

12. Registration in the EU database by deployers (Art. 49(3))

Obligation. Deployers that are public authorities, Union institutions, bodies, offices or agencies must register the use of the high-risk AI system in the EU database established under Article 71 — with the exception of systems referred to in Annex III, point 2 (critical infrastructure). This obligation applies to public deployers and persons acting on their behalf; it does not extend to all private entities providing public services (unlike the FRIA under Art. 27).

Concrete actions:

  • Determine whether the deployer falls within the scope of the registration obligation under Art. 49(3), which is narrower than the scope of Art. 27.
  • Prepare the information required for registration.
  • Complete the registration before putting the system into service.

Responsible function: Compliance officer.

Documentation: Registration confirmation; record of information submitted.

13. Cooperation with authorities (Art. 26(12))

Obligation. The deployer must cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system.

Concrete actions:

  • Identify in advance the competent national authorities (in Italy, within the national framework, ACN is identified as market surveillance authority and AgID as notifying authority, without prejudice to the possible relevance of sector-specific authorities in specific contexts).
  • Establish an internal point of contact for regulatory inquiries.
  • Ensure that relevant documentation (logs, FRIA, DPIA, monitoring records) is accessible and producible upon request.

Responsible function: Compliance officer; legal unit.

Documentation: Contact register of competent authorities; internal procedure for responding to regulatory requests.

Summary table

| Provision | Obligation | Key actions | Responsible | Main documentation |
| --- | --- | --- | --- | --- |
| ARTICLE 26 — DEPLOYER OBLIGATIONS | | | | |
| Art. 26(1) | Use per instructions (TOMs) | Archive instructions; align use with intended purpose; define technical and organisational measures; review upon updates | Compliance / Operations | Instructions archive; use policy; TOMs description |
| Art. 26(2) | Human oversight | Designate, train, empower oversight persons; ensure authority to override/suspend | Senior management / HR | Appointment act; training records |
| Art. 26(4) | Input data quality | Define quality criteria; implement periodic checks on relevance and representativeness | Data management / IT | Data quality policy; check records |
| Art. 26(5) | Monitoring, reporting & suspension | Monitoring protocol; escalation thresholds; immediate suspension mechanism if risk under Art. 79(1); authority contacts | Operations / Compliance | Monitoring protocol; incident log; suspension records |
| Art. 26(6) | Log retention | Retention policy (min. 6 months); secure storage; coordinate with GDPR storage limitation | IT / Data management | Retention policy; storage description |
| Art. 26(7) | Worker information | Information notice to workers; delivery before deployment; document delivery and consultation | HR / Legal | Notice; proof of delivery |
| Art. 26(8) | Registration gate (public deployers) | Verify system is registered in EU database before use; if not registered: do not use and notify provider/distributor | Compliance / Procurement | Verification record; communication to provider |
| Art. 26(9) | Use of provider info for DPIA | Obtain and use provider documentation (Art. 13) for DPIA (Art. 35 GDPR); document information gaps | DPO / Operations | DPIA report |
| Art. 26(10) | Post-remote biometric ID (law enforcement) | Authorisation (prior or within 48h); stop-and-delete if refused; document each use; annual reports to MSA and DPA | Law enforcement / Legal / DPO | Use records; authorisation decisions; annual reports |
| Art. 26(11) | Information to affected persons | Identify decisions involving the system; prepare clear notice; deliver proactively | Operations / Legal | Information notice |
| Art. 26(12) | Cooperation with authorities | Identify competent national authorities; designate internal contact point; ensure documentation accessible on request | Compliance / Legal | Contact register; response procedure |
| CONNECTED OBLIGATIONS | | | | |
| Art. 27 | Fundamental Rights Impact Assessment | Scope: public bodies, private entities providing public services, credit/insurance deployers (Annex III 5(b)(c)), excl. Annex III pt. 2. Assess fundamental rights risks; notify MSA (Art. 27(3)). FRIA complements DPIA; does not replace it | Legal / Compliance / DPO | FRIA report; notification to authority |
| Art. 49(3) | EU database registration (deployers) | Scope: public authorities, Union institutions/bodies/agencies and persons acting on their behalf (narrower than Art. 27). Excl. Annex III pt. 2. Register before deployment | Compliance | Registration confirmation |

The connected obligations in the final section of the table are not part of Art. 26 but are operationally inseparable from the deployer's compliance framework. This table is intended as a general guide and does not replace case-by-case legal qualification.

Timeline and priorities

The obligations under Article 26 become applicable on 2 August 2026 for high-risk systems listed in Annex III. The current law remains that of the AI Act as published: the Digital Omnibus on AI, currently at the trilogue stage, envisages a postponement to 2 December 2027 for stand-alone Annex III systems and 2 August 2028 for systems embedded in regulated products, but the legislative process is not yet concluded and the text has not been adopted. Organisations should plan on the basis of the current timeline — 2 August 2026 for the general application of high-risk obligations, 2 August 2027 for high-risk AI systems that are safety components of products covered by EU harmonisation legislation (Annex I) — while monitoring the legislative process.

However, the AI literacy obligation under Article 4, which is a precondition for effective human oversight under Article 26(2), has been applicable since 2 February 2025. This means that deployers should already be investing in competence development — not waiting for August 2026.

The recommended sequence for implementation is:

  1. Now: Classify AI systems; ensure AI literacy (Art. 4); begin training oversight personnel.
  2. Q2 2026: Conduct DPIA and FRIA where required; prepare information notices; establish monitoring and suspension protocols.
  3. Before 2 August 2026: Complete registration in EU database (where applicable); finalise log retention policies; formalise all documentation.

Downloadable checklist

A PDF version of this checklist — structured by obligation, with space for tracking compliance status — is available for download.

Download the checklist (PDF)

Concluding note

Article 26 of the AI Act is often summarised in a few bullet points. In practice, each obligation requires internal coordination, documented processes and, in several cases, active dialogue with the provider. The checklist presented here is not a substitute for case-by-case legal analysis, which depends on the specific system, the context of deployment and the applicable sector-specific legislation. It is, however, a starting point for structured preparation.

Regulatory precision is not an academic luxury. It is a professional responsibility.


For an operational overview of deployer obligations, AI agents and transparency, see: AI Act: Deployers, AI Agents and Transparency Obligations — The State of Play in Spring 2026.

The blog’s interactive Hubs provide decision-support tools for identifying your role and mapping your obligations: