As we announced in our latest post on LinkedIn on 11/02/2025, the European Commission published its Commission work programme 2025, by which it withdrew the “Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on adapting non-contractual civil liability rules to artificial intelligence (AI Liability)” (Annex IV: Withdrawals - n. 32), stating the following reason for withdrawal:

➡️ «No foreseeable agreement - the Commission will assess whether another proposal should be tabled or another type of approach should be chosen».

📍 Thus, at the moment, the only instrument remaining on AI liability is the “Proposal for a Directive of the European Parliament and of the Council on liability for defective products” (Product Liability Directive - PLD Proposal).

We have already explored the topic in depth in one of our recent papers and report some crucial points here.

From the data protection perspective, the views of the European Data Protection Supervisor (EDPS), expressed in EDPS Opinion 42/2023 on the Proposal for two Directives on AI liability rules, are well known.

The topic is debated because, from a legal point of view, it would not be permissible to recognize a machine’s responsibility. Liability is attributable to human beings, and legal systems do not regulate the liability of machines or artificial intelligence systems.

Artificial intelligence systems operate and behave as they were designed and programmed by human beings.

We do not rule out the possibility of artificial intelligence systems or machines being designed and programmed by other machines.

While the issue of liability for AI systems can be addressed without particular constraints at the European level, given its supranational character, critical issues may arise at the national level, where liability is governed by the nature of each legal system.

Should the abovementioned directive proposal be approved, individual Member States would be obliged to transpose it. At the transposition stage, the national legislator will have to consider the legal nature of the institution of liability, its place within the legal system, and the rationale of the regulation.

The introduction at the national level of a liability regime for AI systems or machines could conflict not only with the existing rules but also with the legal qualification of liability itself, which could be incompatible, by its nature and underlying rationale, with the legal system. All this would entail a profound change in the entire institution of liability in civil and criminal law.

Furthermore, some raise the option of leaving liability with vendors and distributors.

Indeed, the European Commission has always considered accountability in its legislative processes, borrowing the concept from common law systems. Liability and accountability are two legally distinct concepts, and common law systems in fact recognize a third: responsibility.

In contrast, civil law systems recognize only one type of responsibility.

The GDPR is an example of this: it explicitly calls for accountability, placing the burden of compliance with EU Regulation 2016/679 on the data controller and the data processor.

Thus, this path is possible, but the choices lie with a policy that identifies the solutions it deems legally most appropriate, with the consequent need to evaluate the standardization processes.



Stay tuned!