In recent years, artificial intelligence (AI) has been a popular buzzword and a hot topic that has caught the attention of lawmakers. The European Commission (EC) has been researching the topic for several years and, on 21 April 2021, it published the long-awaited Proposal for a Regulation on Artificial Intelligence (hereinafter the Proposed Regulation), building on the White Paper on Artificial Intelligence published in 2020. At the same time, the EC proposed a new Machinery Regulation, designed to ensure the safe integration of AI systems into machinery.
BROAD SCOPE OF APPLICATION
The Proposed Regulation applies to a wide range of actors, including:
- providers placing AI systems on the market or putting them into service in the EU, irrespective of whether those providers are established in the EU or in a third country (i.e., outside the EU);
- users of AI systems established within the EU, under whose authority and responsibility the AI system is used; and
- EU institutions, offices, bodies, and agencies when they are providers or users of AI systems.
The broad scope of application is further extended by the extraterritorial effect of the Proposed Regulation: actors (providers and users) established in a third country are also subject to the Proposed Regulation to the extent that their AI systems affect persons located in the EU. Hence, the Proposed Regulation is likely to have a similar effect on worldwide AI regulation as the GDPR has had on the worldwide development of data protection regulation.
PROHIBITION OF CERTAIN AI PRACTICES
The Proposed Regulation prohibits certain AI practices that are considered a clear threat to the safety, livelihoods, and rights of people, including:
- AI systems that manipulate human behaviour, opinions or decisions through choice architectures or other elements of a user interface, or that exploit information or predictions about an individual or group of individuals in order to target their vulnerabilities or special circumstances. In each case, these practices fall within scope where they cause a person to behave or take a decision to their detriment;
- AI systems used for indiscriminate surveillance applied in a generalised manner to all natural persons without differentiation. This may include the monitoring or tracking of individuals through direct interception of, or access to, communications, location data, metadata or other personal data collected in a physical or virtual environment, where it is performed on a large scale; and
- AI systems that evaluate or classify the trustworthiness of natural persons based on their social behaviour or known or predicted personal or personality characteristics, leading to detrimental or unfavourable treatment.
The above prohibition does not apply where such practices are authorised by law and carried out by, or on behalf of, public authorities in order to safeguard public security, subject to appropriate safeguards.
HIGH-RISK AI SYSTEMS
Furthermore, the Proposed Regulation identifies several high-risk AI systems which are not prohibited but whose use is subject to strict conditions. High-risk AI systems are listed in Annex II of the Proposed Regulation and include:
- AI systems used to dispatch or establish priority in the dispatching of emergency first response services;
- AI systems used to determine access to education or vocational training;
- AI systems used during the recruitment, promotion, or termination process;
- AI systems that evaluate the creditworthiness of persons;
- AI systems used by public authorities to evaluate the eligibility for public assistance benefits and services;
- AI systems used in a law enforcement context to prevent, investigate, detect, or prosecute a criminal offence, to adopt measures impacting the personal freedom of an individual, or to predict the occurrence of crimes or events of social unrest with a view to allocating resources for the patrolling and surveillance of the territory;
- AI systems used for immigration and border control, including to verify the authenticity of travel documentation and to examine asylum and visa applications; and
- AI systems intended to be used to assist judges at court.
The EC is empowered to adopt delegated acts to update the list in Annex II by adding new high-risk AI systems, where it identifies that other AI systems generate a risk of harm comparable to that of the high-risk AI systems already listed in Annex II.
Providers of high-risk AI systems must comply with strict and detailed obligations. Among other things, they must:
- use detailed and specific risk management systems and subject the system to a conformity assessment;
- only use high-quality data that does not incorporate intentional or unintentional biases and is representative, free from errors, and complete;
- conduct post-market monitoring of the operation of the system and notify any serious incident or malfunctioning to the relevant national regulator;
- register the system on a public register;
- keep records and logs, and be transparent to users about the use and operation of the system;
- ensure human oversight through appropriate technical and/or organisational measures; and
- ensure the robustness, accuracy, and security of the AI system.
Users of high-risk AI systems will be subject to more limited obligations. They must use the technology in accordance with the instructions for use and take appropriate technical and organisational measures to address the risks created by the system. Users must also monitor the operation of the system and keep records describing the data used.
The Proposed Regulation further imposes specific obligations upon (i) authorised representatives of providers, (ii) importers, (iii) distributors of high-risk AI systems and (iv) other third parties involved in the AI value chain.
MEASURES IN SUPPORT OF INNOVATION
Interestingly, and in line with the EC’s ambition to make the EU a worldwide leader in AI, the Proposed Regulation not only lays down a regulatory framework for AI but also contains measures in support of innovation. Such measures include (i) AI regulatory sandboxing schemes, (ii) measures to reduce the regulatory burden for SMEs and start-ups and (iii) the establishment of digital hubs and testing and experimentation facilities. These measures may also be a source of inspiration for a possible review of the GDPR.
SANCTIONS
Non-compliance with the rules laid down in the Proposed Regulation can give rise to GDPR-inspired administrative fines of up to EUR 20,000,000 or, in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher. Also, as under the GDPR, national supervisory authorities shall be competent for monitoring and enforcing compliance with the Proposed Regulation. A European Artificial Intelligence Board (EAIB), composed of representatives of the national supervisory authorities, a representative of the European Data Protection Supervisor (EDPS) and a representative from the EC, shall be established. The EAIB’s main task will be to supervise the consistent application of the Proposed Regulation.
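To illustrate the "whichever is higher" cap, the following minimal Python sketch computes the maximum possible fine for a given undertaking (the function name and the example turnover figure are purely illustrative and not taken from the Proposed Regulation):

    def max_administrative_fine(worldwide_annual_turnover_eur: float) -> float:
        # Upper limit of the fine: EUR 20,000,000 or 4% of the total worldwide
        # annual turnover of the preceding financial year, whichever is higher.
        fixed_cap = 20_000_000
        turnover_cap = 0.04 * worldwide_annual_turnover_eur
        return max(fixed_cap, turnover_cap)

    # Example: an undertaking with EUR 1 billion in worldwide annual turnover faces
    # a maximum fine of EUR 40,000,000, since 4% exceeds the fixed EUR 20,000,000 cap.
    print(max_administrative_fine(1_000_000_000))  # 40000000.0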
NEXT STEPS
The Proposed Regulation is not yet applicable; its publication is only the start of a lengthy legislative process during which adaptations are still likely. Once adopted, it will become applicable one to two years later, so it is not expected to apply before 2024. Stakeholders are nevertheless advised to closely monitor the legislative process and take a position on those issues that are of interest to them.