November 8, 2023
In April 2021, the European Commission proposed the first EU regulatory framework for artificial intelligence (AI). The proposed AI Act is the first-ever attempt to enact horizontal regulation for AI. EU lawmakers are now negotiating to finalize the new rules, with substantial amendments to the Commission’s proposal, including a revised definition of AI systems, a broader list of prohibited AI systems, and new obligations on general-purpose AI and generative AI models such as ChatGPT[1].
AI can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. It can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
At the same time, depending on its specific application and use, artificial intelligence may generate risks and harm public interests and fundamental rights protected by EU law, in particular the rights to privacy and freedom of expression. An Act on AI is therefore needed to govern the development and use of AI and to prevent harmful impacts on human rights and the public interest.
In the EU, some Member States are already considering national rules to ensure that AI is safe and is developed and used in compliance with fundamental rights. Divergent national rules would likely fragment the internal market on essential elements, in particular the requirements for AI products and services, their marketing and use, liability, and supervision by public authorities. This problem is best addressed through harmonizing EU legislation: the EU AI Act.
The European Commission put forward the proposed regulatory framework on Artificial Intelligence with a set of specific objectives.
The legal basis for the proposal is the Treaty on the Functioning of the European Union (TFEU), and in particular Articles 16 and 114.
Article 114 provides for the adoption of measures to ensure the establishment and functioning of the internal market. The primary objective of the proposal is to ensure the proper functioning of the internal market by setting harmonized rules in particular on the development, placing on the Union market and the use of products and services making use of AI technologies or provided as stand-alone AI systems.
Article 16 provides for the right of everyone to the protection of personal data concerning them. The proposal contains certain specific rules on the protection of individuals with regard to the processing of personal data, notably restrictions on the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement.
The AI Act lays down harmonized rules on artificial intelligence and defines the scope of actors and systems to which those rules apply.
However, the draft Act does not apply to AI systems developed or used exclusively for military purposes, nor to public authorities in a third country or international organizations using AI systems in the framework of international agreements for law enforcement and judicial cooperation.
Risk-based approach
The proposed Act lays down a classification of AI systems with requirements and obligations tailored to a risk-based approach. It distinguishes four risk levels: (i) unacceptable risk, (ii) high risk, (iii) limited risk, and (iv) low or minimal risk. AI systems posing unacceptable risks would be prohibited. A wide range of high-risk AI systems would be authorized, but subject to a set of requirements and obligations to gain access to the EU market. AI systems presenting only limited risk would be subject to light transparency obligations, while low or minimal-risk AI systems would face no additional obligations to be developed and used in the EU.
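The four-tier scheme can be summarized as a simple lookup from risk level to regulatory consequence. The sketch below is purely illustrative: the tier names and consequences paraphrase the proposal, while the data structure and function are not part of the Act.

```python
# Illustrative sketch of the AI Act's four-tier, risk-based approach.
# Tier names and consequences paraphrase the proposal; the code
# itself is illustrative and not part of the Act.

RISK_TIERS = {
    "unacceptable": "Prohibited from the EU market (Article 5 practices).",
    "high": "Permitted subject to mandatory requirements and an "
            "ex-ante conformity assessment (Article 6).",
    "limited": "Permitted subject to light transparency obligations.",
    "minimal": "Permitted with no additional obligations.",
}

def consequence(tier: str) -> str:
    """Return the regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier]

print(consequence("unacceptable"))
```

The key design point of the proposal is visible here: obligations scale with risk, so only the top two tiers carry substantive compliance burdens.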
According to Article 3(1), an AI system is defined as software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with. The definition aims to be as technology-neutral and future-proof as possible, taking into account the fast technological and market developments related to AI.
Article 5 of the proposed AI Act explicitly bans harmful AI practices that are considered a clear threat to people’s safety, livelihoods, and rights because of the unacceptable risk they create. It would accordingly be prohibited to place such systems on the market, put them into service, or use them in the EU.
Article 6 contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons. In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment.
All of these high-risk AI systems would be subject to a set of rules:
- Providers of high-risk AI systems would be required to register their systems in an EU-wide database managed by the Commission before placing them on the market or putting them into service.
- Providers of AI systems not currently governed by EU legislation would have to conduct their own conformity assessment showing that they comply with the new requirements.
- High-risk AI systems would have to comply with a range of requirements, particularly on risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity.
- Providers, importers, distributors, and users of high-risk AI systems would have to fulfill a range of obligations.
- Providers from outside the EU would be required to have an authorized representative in the EU, ensure the conformity assessment, establish a post-market monitoring system, and take corrective action as needed.
Certain AI systems will additionally be subject to specific transparency obligations.
When persons interact with an AI system, or when their emotions or characteristics are recognized through automated means, they must be informed of that circumstance. If an AI system is used to generate or manipulate image, audio, or video content that appreciably resembles authentic content, there should be an obligation to disclose that the content is generated through automated means, subject to exceptions for legitimate purposes (e.g., law enforcement, freedom of expression). This allows persons to make informed choices or step back from a given situation.
Title V contributes to the objective of creating a legal framework that is innovation-friendly, future-proof and resilient to disruption. To that end, it encourages national competent authorities to set up regulatory sandboxes and sets a basic framework in terms of governance, supervision and liability. AI regulatory sandboxes establish a controlled environment to test innovative technologies for a limited time on the basis of a testing plan agreed with the competent authorities. Title V also contains measures to reduce the regulatory burden on small and medium enterprises and start-ups.
At the Union level, the proposal establishes a European Artificial Intelligence Board (the ‘Board’), composed of representatives from the Member States and the Commission. The Board will facilitate a smooth, effective, and harmonized implementation of this regulation by contributing to the effective cooperation of the national supervisory authorities and the Commission and providing advice and expertise to the Commission.
At the national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulation.
PrivacyCompliance provides solutions related to ensuring compliance with personal data regulations, assessing the impacts of personal data processing, drafting impact assessment dossiers, and cross-border data transfer dossiers.
#AI #EU #AIregulation #ChatGPT #Generative #InteractiveAI #AIsystems #EUAIact
[1] https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf