EU Artificial Intelligence Act

November 8, 2023

In April 2021, the European Commission proposed the first EU regulatory framework for artificial intelligence (AI). The proposed AI Act is the first-ever attempt to enact a horizontal regulation for AI. EU lawmakers are now negotiating to finalize the new rules, with substantial amendments to the Commission’s proposal, including a revised definition of AI systems, a broader list of prohibited AI practices, and obligations on general-purpose AI and generative AI models such as ChatGPT [1].

The role of AI and the importance of AI management

AI can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. The use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.

At the same time, depending on the circumstances of its specific application and use, artificial intelligence may generate risks and cause harm to public interests and fundamental rights protected by EU law, in particular the right to privacy and the right to freedom of expression. An Act on AI is therefore needed to govern the development and use of AI and to prevent its harmful impact on human rights and the public interest.

In the EU, some Member States are already considering national rules to ensure that AI is safe and is developed and used in compliance with fundamental rights. Divergent national rules would likely fragment the internal market on essential elements, in particular the requirements for AI products and services, their marketing, their use, liability, and supervision by public authorities. This problem is best addressed through harmonizing legislation at the EU level: the EU AI Act.

Introduction of proposed EU Artificial Intelligence Act

Objectives

The European Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:

  • Ensure that AI systems placed on the Union market and used in the Union are safe and respect existing laws on fundamental rights and Union values;
  • Ensure legal certainty to facilitate investment and innovation in AI;
  • Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
  • Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

Legal basis

The legal basis for the proposal is the Treaty on the Functioning of the European Union (TFEU), and in particular Articles 16 and 114.

Article 114 provides for the adoption of measures to ensure the establishment and functioning of the internal market. The primary objective of the proposal is to ensure the proper functioning of the internal market by setting harmonized rules in particular on the development, placing on the Union market and the use of products and services making use of AI technologies or provided as stand-alone AI systems.

Article 16 provides for the right of everyone to the protection of personal data concerning them. The proposal contains certain specific rules on the protection of individuals with regard to the processing of personal data, notably restrictions on the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement.

Subject matter

The AI Act lays down:

  • Harmonized rules for placing on the market, putting into service, and using AI systems in the Union;
  • Prohibitions of certain artificial intelligence practices;
  • Specific requirements for high-risk AI systems and obligations for operators of such systems;
  • Harmonized transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio, or video content;
  • Rules on market monitoring and surveillance.

Scope

The AI Act applies to:

  • Providers placing on the market or putting into service AI systems in the Union;
  • Users of AI systems located within the Union;
  • Providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.

However, the draft Act does not apply to AI systems developed or used exclusively for military purposes, nor to public authorities in a third country or international organizations using AI systems in the framework of international agreements for law enforcement and judicial cooperation.

Risk-based approach

The proposed Act lays down a classification of AI systems, with requirements and obligations tailored to a risk-based approach. There are four risk levels: (i) unacceptable risk, (ii) high risk, (iii) limited risk, and (iv) low or minimal risk. AI systems posing unacceptable risks would be prohibited. A wide range of high-risk AI systems would be authorized, but subject to a set of requirements and obligations before they can access the EU market. AI systems presenting only limited risk would be subject to light transparency obligations, while low or minimal-risk AI systems face no additional obligations to be developed and used in the EU.
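Purely as an illustration of this tiered logic (this is a paraphrase, not text from the Act, and the tier names and obligation summaries below are the author's shorthand), the risk-based approach can be sketched as a simple mapping from risk tier to regulatory treatment:

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Article 5)
    HIGH = "high"                   # permitted subject to requirements (Article 6 ff.)
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no additional obligations


# Paraphrased regulatory consequence per tier (illustrative, not legal text).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "conformity assessment, registration, risk management, human oversight",
    RiskTier.LIMITED: "disclose that people are interacting with or viewing AI output",
    RiskTier.MINIMAL: "no extra obligations under the Act",
}


def treatment(tier: RiskTier) -> str:
    """Return the paraphrased treatment an AI system of the given tier receives."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {treatment(tier)}")
```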

Some important provisions of the EU Artificial Intelligence Act

Definition of AI system

Under Article 3(1), an AI system is defined as software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The definition aims to be as technology-neutral and future-proof as possible, taking into account the fast technological and market developments related to AI.

Prohibited AI practices

Article 5 of the proposed AI Act explicitly bans harmful AI practices that are considered a clear threat to people’s safety, livelihoods, and rights because of the unacceptable risk they create. Accordingly, it would be prohibited to place on the market, put into service, or use in the EU:

  • AI systems that deploy harmful manipulative ‘subliminal techniques’;
  • AI systems that exploit the vulnerabilities of specific groups of persons (for example, due to age or physical or mental disability);
  • AI systems used by public authorities, or on their behalf, for social scoring purposes;
  • Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.

High-risk AI systems

Article 6 contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons. In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment.

All of these high-risk AI systems would be subject to a set of rules, including:

  • Providers of high-risk AI systems would be required to register their systems in an EU-wide database managed by the Commission before placing them on the market or putting them into service;
  • Providers of AI systems not currently governed by EU legislation would have to conduct their own conformity assessment showing that they comply with the new requirements;
  • High-risk AI systems would have to comply with a range of requirements, particularly on risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity;
  • Providers, importers, distributors, and users of high-risk AI systems would have to fulfill a range of obligations;
  • Providers from outside the EU would be required to have an authorized representative in the EU, ensure the conformity assessment, establish a post-market monitoring system, and take corrective action as needed.

Transparency obligation for certain AI systems

Transparency obligations will apply for systems that:

  • Interact with humans;
  • Are used to detect emotions or determine association with (social) categories based on biometric data;
  • Generate or manipulate content (deep fakes).

When persons interact with an AI system, or when their emotions or characteristics are recognized through automated means, they must be informed of that circumstance. If an AI system is used to generate or manipulate image, audio, or video content that appreciably resembles authentic content, there should be an obligation to disclose that the content was generated through automated means, subject to exceptions for legitimate purposes (e.g. law enforcement, freedom of expression). This allows persons to make informed choices or step back from a given situation.
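As a rough sketch of how this disclosure rule reads (not an API or procedure defined by the Act; the record fields below are hypothetical), the rule-plus-exceptions structure amounts to a simple check:

```python
from dataclasses import dataclass


@dataclass
class GeneratedContent:
    """Hypothetical record describing a piece of AI-generated or AI-manipulated media."""
    resembles_authentic_content: bool   # would a person take it for real image/audio/video?
    law_enforcement_use: bool = False   # exception for legitimate law-enforcement purposes
    protected_expression: bool = False  # exception linked to freedom of expression


def disclosure_required(item: GeneratedContent) -> bool:
    """Paraphrase of the transparency rule: disclose automated generation
    unless one of the legitimate-purpose exceptions applies."""
    if not item.resembles_authentic_content:
        return False
    return not (item.law_enforcement_use or item.protected_expression)


# Example: a deep fake with no applicable exception must be labelled as AI-generated.
print(disclosure_required(GeneratedContent(resembles_authentic_content=True)))  # True
```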

Measures in support of innovation

Title V contributes to the objective of creating a legal framework that is innovation-friendly, future-proof and resilient to disruption. To that end, it encourages national competent authorities to set up regulatory sandboxes and sets a basic framework in terms of governance, supervision and liability. AI regulatory sandboxes establish a controlled environment to test innovative technologies for a limited time on the basis of a testing plan agreed with the competent authorities. Title V also contains measures to reduce the regulatory burden on small and medium enterprises and start-ups.

Governance and implementation

At the Union level, the proposal establishes a European Artificial Intelligence Board (the ‘Board’), composed of representatives from the Member States and the Commission. The Board will facilitate a smooth, effective, and harmonized implementation of this regulation by contributing to the effective cooperation of the national supervisory authorities and the Commission and providing advice and expertise to the Commission.

At the national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulation.

PrivacyCompliance provides solutions for ensuring compliance with personal data regulations, assessing the impacts of personal data processing, and drafting impact assessment and cross-border data transfer dossiers.

#AI #EU #AIregulation #ChatGPT #Generative #InteractiveAI #AIsystems #EUAIact

[1] https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf

