In the age of artificial intelligence (“AI”), many legal questions arise around compensation for damage caused by defective products. For example, an autonomous car can wrongly identify an object on the road and cause an accident, or a malfunctioning surgical robot can injure a patient. On 28 September 2022, the European Commission proposed two complementary draft directives to adapt existing liability rules to new digital technologies. This blogpost explores some of the key provisions of these initiatives.
The proposal for a revised Product Liability Directive (“the revised PLD”)
First, the European Commission proposed a revised PLD, which updates the existing PLD of almost 40 years ago. The revised PLD modernises the existing strict product liability regime (also known as no-fault liability, i.e. liability irrespective of fault), which applies to claims against the manufacturer for damage caused by defective products.
Some of the key features under the revised PLD are the following:
- the definition of a “product” is explicitly expanded to include new types of products, such as:
- intangible items (software or AI systems) and digital services that affect how the product works (e.g. navigation services in autonomous vehicles). When, for example, products such as robots become defective due to a lack of software updates under the manufacturer’s control, the new rules allow for compensation;
- products in the circular economy, such as refurbished machinery or equipment, which are now also covered under the revised PLD.
- the notion of damage has been extended: not only death, personal injury or damage to property, but also harm to psychological health and the loss or corruption of data can be taken into account.
- a presumption of causality and a right to the disclosure of evidence are provided for as well, comparable to the new AI Liability Directive.
The proposal for a new AI Liability Directive (“AILD”)
Secondly, the European Commission published a proposal for a new AI Liability Directive, which aims to reform national fault-based liability rules. Under this regime, in order to bring a successful claim, the claimant must prove:
- the fault (a wrongful act or omission);
- the damage; and
- the causal link between the two.
The new AILD introduces, for the first time, rules specific to damage caused by AI systems, for which the providers of AI systems (and, in some cases, their users) could be held liable.
Some of the key provisions in this AILD are:
- the so-called ‘presumption of causality’, which relieves victims of having to explain in detail how the damage was caused by a certain fault or omission. For example, if a victim proves that someone (e.g. a drone operator) was at fault for not complying with an obligation relevant to the harm (such as not respecting the instructions for use), the court will, under certain circumstances, presume that this non-compliance caused the damage.
- easier access for victims to evidence held by companies or suppliers. Victims will be able to ask the court to order the disclosure of information in certain circumstances, subject to appropriate safeguards to protect sensitive information such as trade secrets.
Entry into force
The two proposals of the European Commission cover different liability regimes and can thus complement each other, as well as the AI Act (which we discussed in our previous blogpost). Both proposals are currently before the European Parliament and the Council, which will consider them for adoption. The Directives will then have to be transposed into Belgian law before they become applicable.