New draft directives on product liability and liability for AI systems
On September 28, 2022, the European Commission published a “package” of two proposals for directives aimed at harmonizing liability rules for the digital environment. One draft directive deals specifically with the liability issues raised by the use of Artificial Intelligence (AI) systems (the “AI Liability Directive” or “AILD”); the other revises the strict liability rules for defective products and will repeal the existing Product Liability Directive 85/374/EEC (the “PLD”).
For the Commission, this package of complementary rules aims to promote trust in new technologies, in particular AI, by ensuring that injured persons are effectively compensated for damage linked to those technologies and products. The two proposals will now be discussed by the European Parliament and the Council. Once they are adopted, the Member States will have to transpose the new rules into national law (within two years after the entry into force of the AILD and within one year after the new PLD enters into force).
The Product Liability Directive
This draft Directive contains rules on the liability of economic operators for damage caused to natural persons by defective products, defined as products that do not provide the safety that the public at large is entitled to expect. The draft updates and modernizes the existing rules on strict liability for manufacturers of defective products by, among other things:
– Taking into account the modification of products after they have been placed on the market, their interaction with other products, safety-relevant cybersecurity requirements, and the specific expectations of end-users (art. 6);
– Making clear that providers of software and of digital services that affect how a product works can, along with hardware manufacturers, also be held liable;
– Allowing compensation for personal injury, damage to personal property, and data loss caused by unsafe AI systems, robots, or other smart products, through the inclusion of AI systems and AI-enabled goods in the definition of “products” (art. 4);
– Treating operators that substantially modify products already placed on the market as manufacturers for the purposes of the liability regime (art. 7);
– Making it possible to also seek compensation from importers or fulfilment service providers (such as those offering warehousing, packaging or dispatching services) for products manufactured outside the EU (art. 7);
– Reinforcing the position of claimants by ensuring a court procedure for ordering the disclosure of relevant evidence (art. 8, which will have to be balanced against the protection of trade secrets under Directive (EU) 2016/943); and
– Easing the burden of proof with a rebuttable presumption of defectiveness in certain circumstances – for example, where the defendant fails to comply with an obligation to disclose relevant evidence at its disposal, or where the product does not comply with mandatory safety requirements (art. 9) – while providing for certain exemptions in favor of the potentially liable operators (art. 10).
Although the Product Liability Directive does not affect other rules, such as special liability regimes, the GDPR or the future AI Liability Directive, it clearly has links with the AI Act, as well as with the EU rules on cybersecurity, and it might well clash with the EU rules on trade secrets.
The AI Liability Directive
For the Commission, the current national liability rules, based on fault and causality, are not suited to handling liability claims for AI-related damage. Fault-based proceedings often require the victim to prove a wrongful action by a person, which can be difficult and costly, in particular where the damage is caused by an AI system. The autonomy, complexity and sometimes opacity of those systems make it harder for the injured person to establish the causal link between a fault and the relevant damage. To make AI trustworthy, the draft AILD therefore eases the burden of proof for victims of harm related to AI systems (which are not necessarily products in the sense of the draft PLD and thus do not necessarily fall within its scope).
The AILD covers claims for damages where the damage is caused by an output (or the failure to produce an output) of an AI system due to the fault of a person, e.g. the provider or user of that system (as defined in the AI Act). The draft Directive does not regulate the definition of fault or causality, the types of damage that can be compensated, the distribution of liability among multiple tortfeasors, contributory conduct, or the calculation of damages – all of which remain subject to national law. It does aim to facilitate the injured party's access to information during legal proceedings and to alleviate the burden of proof by:
– Ensuring that court proceedings allow claimants to obtain an order for disclosure of evidence (art. 3);
– Providing for a rebuttable presumption of a causal link between the fault of the defendant and the output produced by the AI system (or its failure to produce an output), if certain conditions are met (art. 4).
The relationship between the two directives and other legislation
The two proposals use similar language and mainly focus on facilitating proof, with a right to obtain an order for the disclosure of evidence and the provision of rebuttable presumptions. There are, however, a few differences: the PLD covers no-fault liability claims against manufacturers (or other operators) for placing defective products on the market, while the AILD eases the burden of bringing claims based on the fault of not only the providers but also the users of AI systems. In addition, AI systems are not (necessarily) products in the sense of the PLD. Thus, the claims brought by victims under the two directives are intended not to overlap.
The two new proposals complement several pieces of existing or forthcoming EU legislation, such as the AI Act. The new rules favoring individual claims complement the ex ante requirements, such as conformity assessments and other preventive measures, imposed by the future AI Act, whose definitions of (high-risk) AI systems and of the providers and users of those systems are incorporated into the AILD.
The new rules will provide guidance for businesses, in particular where software and AI tools are used. It remains to be seen whether the whole legislative package will result in increased trust in AI.
Facing a liability question? Do not hesitate to contact us should you require further information on this issue or any other data-related matter.