What is the Current Status and Outlook of AI Regulation in the EU? Explaining the Impact on Japanese Companies
With the advancement of AI, the use of AI tools such as ChatGPT in business operations is becoming increasingly common, and AI-related business is flourishing. At the same time, the international regulation of AI is gaining attention.
In Japan, the Ministry of Economy, Trade and Industry has published the “Governance Guidelines for the Implementation of AI Principles Ver. 1.1[ja]” (as of the time of writing). On June 14, 2023 (Reiwa 5), the world’s first international “AI Regulation Act” was adopted by the European Parliament, which has also drawn significant interest in Japan.
This article will introduce the current state and future prospects of the AI Regulation Act, as well as explain its impact on Japanese companies.
What is the AI Regulation Act (AI Act)?
On June 14, 2023, the European Parliament adopted a comprehensive ‘Draft AI Regulation’ targeting the general use of AI. This is the world’s first international ‘AI Regulation Act (AI Act),’ consisting of 85 articles, and it represents a unified rule (secondary legislation) within the EU.
Going forward, informal negotiations (trilogues) involving three parties—the European Commission, the European Parliament, and the Council of the European Union—are expected to reach an agreement within 2023. Following the approval of the European Parliament and the Council of the European Union, which are the legislative bodies, the act is anticipated to be enacted and come into force as early as 2024.
The Legal System of the European Union
The legal system of the European Union (EU) consists of three main components: primary law (treaties), secondary law (community legislation), and case law.
Secondary law is based on primary law (treaties) and includes regulations and directives that directly or indirectly regulate member states within the EU, also known as EU law or derived law.
Secondary law comes in five main types. The “EU AI Act” falls under the category of a Regulation, meaning it establishes a unified set of rules that directly bind all member states. The five types are:
- Regulation: Binding on all member states and, upon adoption, becomes directly applicable without the need for ratification procedures, becoming part of the national legal systems.
- Directive: Member states are legally obligated to enact new legislation or amend existing laws to achieve the objectives set out by the directive.
- Decision: One form of legally binding law that is specific rather than general in its application, targeting particular member states, companies, or individuals.
- Recommendation: The European Commission advises governments, companies, or individuals within member states to take certain actions or measures. While not legally binding or enforceable, recommendations are intended to encourage the enactment or amendment of laws within the EU member states.
- Opinion: Also known as a “view,” it is an expression of the European Commission’s stance on a specific topic, without any legal binding force or compulsion.
Regulations are the most binding form of secondary law; the GDPR (General Data Protection Regulation) is a well-known example.
Scope of Application of the AI Regulation Act
The European Union’s “AI Regulation Act” is directly applicable within the EU and also has extraterritorial application to third countries that are trading partners, carrying legal binding force. The regulated entities are businesses that introduce AI systems and services targeting the European market, including AI developers, deployers, providers, importers, distributors, and users.
The AI Regulation Act sets out clear requirements for specific AI systems and the obligations of businesses, while also seeking to reduce administrative and financial burdens on small and medium-sized enterprises (SMEs).
This legislative proposal is part of a broad AI package aimed at ensuring the safety and fundamental rights associated with AI, and at strengthening the approach to, investment in, and innovation of AI across the EU.
European regulations must align with the fundamental philosophy of the Treaty on the Functioning of the European Union. This means that human rights and freedoms within the EU must be guaranteed in the realm of AI technology, necessitating appropriate safeguards.
The objectives of the regulation are described as “to promote the use of trustworthy AI under human oversight and to ensure the protection of health, safety, fundamental rights, democracy and the rule of law, and the environment from AI risks.”
Specifically, the following “general principles applicable to all AI systems” are listed:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination, and fairness
- Social and environmental well-being
Within this regulatory framework, it is clearly indicated that measures to ensure AI literacy among AI developers, users, and providers are essential for achieving the principles of AI.
In the event of a violation, substantial fines can be imposed based on global sales (up to either 30 million euros, approximately 4.7 billion yen, or 6% of global sales, whichever is higher), which could effectively preclude companies from doing AI business within the EU.
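As a rough, illustrative sketch (the function name and structure below are our own; the figures simply restate the cap described above and are not quoted from the Act's text), the "whichever is higher" fine cap works as follows:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Illustrative only: the cap is the higher of a fixed amount
    (EUR 30 million) and 6% of global annual turnover, per the
    figures cited in the text above."""
    FIXED_CAP_EUR = 30_000_000
    TURNOVER_RATE = 0.06
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_turnover_eur)

# For a company with EUR 1 billion in global turnover,
# the percentage-based cap exceeds the fixed cap.
print(max_fine_eur(1_000_000_000))
```

Note that for smaller companies the fixed cap dominates, so the maximum exposure never falls below 30 million euros.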
Therefore, companies currently engaged in AI business in the EU market, including those from Japan, as well as those considering entering the EU market in the future, are required to comply with the new EU AI regulations.
The Background of AI Regulatory Legislation
While generative AI is a convenient tool, it also carries risks that could promote crime and threaten democracy. As AI technology evolves and becomes more widespread, these issues have become unavoidable challenges.
Since 2016, the European Union (EU), the United States, and China have been publishing guidelines and national strategy proposals concerning AI. The EU, in particular, has been advancing the development of regulations for AI and big data, and between 2017 and 2022, significant guidelines, declarations, and regulatory proposals have been created.
For example, the General Data Protection Regulation (GDPR) was enacted in April 2016, the AI Regulation Proposal was announced on April 21, 2021, the European Data Governance Act (DGA) was established on May 30, 2022, and has been in force since September 24, 2023.
These regulations aim to ensure that AI and big data are used safely and fairly across society while simultaneously promoting innovation and economic growth.
The EU has adopted “A Europe fit for the Digital Age” as its digital strategy.
Even after the announcement of the AI Regulation Proposal, in response to the rapid evolution and proliferation of generative AI, the European Commission adopted a revised proposal on June 14, 2023, which added new considerations and requirements for generative AI.
| Date | Event |
| --- | --- |
| April 21, 2021 | The European Commission announces the “EU AI Regulation Proposal” |
| May 11, 2023 | Amendments passed by the “Committee on the Internal Market and Consumer Protection” and the “Committee on Civil Liberties, Justice, and Home Affairs” |
| June 14, 2023 | Amendments adopted by the European Parliament |
| October 24, 2023 | Fourth trilogue meeting held; provisional agreement reached |
| December 6, 2023 | Final trilogue meeting scheduled; approval by the European Parliament and the EU Council; enactment of the “EU AI Regulation” |
| Second half of 2024 | Expected to come into force |
Features of the AI Regulation Act
The backbone of the “AI Regulation Act” consists of three main features: “risk-based AI classification,” “requirements and obligations,” and “support for innovation.”
This regulation employs a “risk-based approach,” categorizing AI risks into four levels and applying corresponding regulations to each category.
Specifically, as shown in the table below, prohibitions, requirements, and obligations are defined according to the four risk levels of AI systems. High-risk AI is subject to specific usage restrictions to ensure the safety of human life and body, the protection of self-determination, and the maintenance of democracy and due process.
| Risk Level | Usage Restrictions | Target AI Systems | Requirements & Obligations |
| --- | --- | --- | --- |
| Unacceptable risk (AI that contradicts EU values) | Prohibited | ① Subliminal techniques ② Exploitation of vulnerabilities ③ Social scoring ④ “Real-time” remote biometric identification systems in public areas for law enforcement (with limited exceptions) | Prohibited |
| High risk (safety elements of regulated products; specific AI systems posing significant risks to health, safety, fundamental rights, or the environment) | Permitted subject to compliance with requirements and conformity assessment | ① Biometric recognition & classification (industrial machinery, medical devices) ② Management & operation of critical infrastructure ③ Education & vocational training ④ Employment, worker management, and access to self-employment ⑤ Access to essential private & public services ⑥ Law enforcement (all subjects are law enforcement agencies) ⑦ Immigration, asylum, and border management (all subjects are the competent public authorities) ⑧ Administration of justice and democratic processes | Strict obligations such as risk management systems, data governance, technical documentation, log retention, human oversight measures, and conformity assessment procedures |
| Limited risk (AI systems subject to transparency obligations) | Permitted subject to transparency obligations | ① AI systems that interact with natural persons, such as chatbots ② Emotion recognition systems & biometric classification systems ③ Deepfake-generating AI systems | Limited obligations such as ensuring no illegal content is generated, disclosure of copyright-protected data used in training, and prior notification of AI use |
| Minimal risk (systems other than those above) | No restrictions | Systems other than those above | Codes of conduct recommended |
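As a minimal, hypothetical sketch (the enum names and the one-line obligation summaries below are our own condensation of the table above, not terminology quoted from the Act), the four-tier classification can be modeled as a simple lookup from risk level to headline obligation:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative summary of the obligations table above (not legal text).
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited",
    RiskLevel.HIGH: "conformity assessment, risk management, human oversight",
    RiskLevel.LIMITED: "transparency obligations (e.g. disclose AI use)",
    RiskLevel.MINIMAL: "voluntary codes of conduct",
}

def obligations_for(level: RiskLevel) -> str:
    """Return the headline obligation for a given risk tier."""
    return OBLIGATIONS[level]

print(obligations_for(RiskLevel.LIMITED))
```

The point of the structure is that obligations attach to the tier, not to the individual system: once a system is classified, its baseline duties follow mechanically.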
The Impact of AI Regulation on Japan
The EU has been a global pioneer in introducing regulations in areas such as human rights, personal data protection, and environmental conservation, setting the ‘gold standard’ for subsequent national regulatory frameworks.
In Japan, the amendment of the Japanese Personal Information Protection Act was also driven by the need to harmonize decentralized regulations and to address the challenges posed by the EU’s GDPR (General Data Protection Regulation). Furthermore, laws such as the ‘Act on Improving Transparency and Fairness of Specified Digital Platforms’ (enacted on February 1, 2021), have been designed with reference to EU regulations.
Currently, Japan does not regulate AI through hard law but instead adopts a policy of self-regulation through soft law.
As mentioned above, the EU’s ‘AI Regulation’ applies directly within EU member states and also has extraterritorial reach, so that businesses located outside the EU are affected as well when they operate in the EU market.
As will be discussed later, multiple laws from different perspectives may apply to the shipment of AI products within the EU, and it is essential to take measures to comply with them. Japanese companies must closely monitor these developments and take appropriate legal measures.
Adoption of Amendments Including Generative AI
The AI Regulation Act has been shaped through amendments negotiated in the EU’s trilogue (among the European Commission, the European Parliament, and the Council of the European Union).
On May 11, 2023, the IMCO (Committee on Internal Market and Consumer Protection) and the LIBE (Committee on Civil Liberties, Justice and Home Affairs) passed amendments to the AI Regulation Act.
These amendments were adopted by the European Parliament on June 14, 2023.
The report includes significant modifications to the legislative proposal, such as a prohibition on predictive policing, numerous additions to the list of high-risk standalone AI systems, and a new, comprehensive role for an AI Office (an institution to replace the proposed European Artificial Intelligence Board, EAIB).
Furthermore, stronger alignment with the GDPR (General Data Protection Regulation) has been proposed, indicating an increase in stakeholder involvement in certain areas, as well as the introduction of specific provisions related to generative and general-purpose AI.
Subsequently, on October 24, 2023, the fourth trilogue on the AI Regulation Act was held, where significant progress was made on politically sensitive issues. Notably, a provisional agreement was reached on the contentious filter mechanism for high-risk AI systems (Art. 6).
Additionally, political guidance was provided on the future direction concerning foundational models/general-purpose AI systems, governance, prohibitions, and law enforcement agencies. The technical team was mandated to work on specific text proposals regarding the aforementioned issues.
Related Laws Relevant to AI Regulation
AI regulation intersects with multiple statutes enacted from different perspectives. The following three laws have been enacted by the European Union (EU) to protect personal information in the digital space and to ensure fair competition.
The Digital Services Act (DSA)
The Digital Services Act (DSA), enacted on November 16, 2022 (with full implementation scheduled for February 17, 2024), is a comprehensive set of rules related to e-commerce within the EU.
While the EU established the E-Commerce Directive in 2000, it has become increasingly difficult to apply it to the evolving digital environment, including the internet and online platforms. Therefore, the DSA was enacted as a revised and unified EU regulation to address these challenges.
The DSA aims to ensure the proper functioning of intermediary services in the EU internal market, protect users’ fundamental rights, and maintain a safer digital environment. It applies to online intermediary services, hosting services, and online platforms, including Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs).
This legislation comprehensively regulates both B2B and B2C aspects, outlining responsibilities in cases where illegal content is posted and providing guidelines for handling disputes.
Specifically, it mandates measures to eliminate illegal content, products, and services, strengthens the protection of users’ fundamental rights, and demands comprehensive rules for transparency and accountability.
Furthermore, stricter rules are imposed on VLOPs and VLOSEs within the EU that have an average monthly user base of over 45 million people. These designated VLOPs and VLOSEs must adapt their systems, resources, and processes to comply with the DSA within four months of notification, implement mitigation measures, and establish independent systems for legal compliance. They must also conduct audits and an initial annual risk assessment and report to the European Commission.
The DSA is set to be fully applicable from February 17, 2024, and the compliance of businesses other than VLOPs and VLOSEs will be monitored by the European Commission and the authorities of the member states.
Member states are required to establish an independent ‘Digital Services Coordinator’ by February 17, 2024, to supervise compliance with the DSA, and they will be granted enforcement powers, including the imposition of fines for non-compliance.
On the other hand, the European Commission will directly supervise VLOPs and VLOSEs and have the authority to enforce penalties.
The fines for violating the law can amount to up to 6% of the offending company’s total global annual turnover from the preceding financial year.
This legislation is part of the EU’s digital strategy known as “A Europe fit for the Digital Age,” aiming to address new challenges and risks in the evolving digital era.
The Digital Markets Act (DMA)
The European Union’s Digital Markets Act (DMA), which largely came into force on May 2, 2023, aims to ensure a fair and competitive digital market and to prevent certain digital platforms from dominating the market.
The regulation targets designated gatekeepers, setting out their obligations and imposing sanctions such as fines of up to 10% of their total worldwide annual turnover in the event of non-compliance.
A “gatekeeper” is one of the largest digital platforms operating within the European Union, holding an entrenched position in certain digital sectors and meeting specific criteria regarding user numbers, annual turnover, and capital. Such platforms are so named because they serve as gateways between businesses and consumers.
On September 6, 2023, the European Commission designated the first set of gatekeepers, and these companies were granted a grace period of up to six months (until March 2024) to comply with the new obligations under the Digital Markets Act. The designated gatekeepers are Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft, with a total of 22 core platform services provided by these gatekeepers being subject to the law.
This legislation aims to prevent large digital platforms from abusing their market power and to make it easier for new entrants to access the market.
GDPR (General Data Protection Regulation)
The GDPR (General Data Protection Regulation) is a data protection law that came into effect on May 25, 2018.
It is a legal framework that sets guidelines for the collection and processing of personal information from individuals both inside and outside the EU. This regulation imposes obligations on organizations that target the EU or collect data related to individuals within the EU.
Related article: Explaining the key points in creating a GDPR-compliant privacy policy[ja]
Trends in Anticipated AI Regulations
We will now discuss the AI systems that companies should pay close attention to, as identified in the aforementioned AI risk classification table.
Prohibition of the Use of Social Credit Scores
One of the AI systems that falls under the “prohibited risks” in the EU regulatory law is the social credit score system. According to the amendment, its use will be completely banned, not only by public institutions but across the board.
The term “social credit score” refers to a system that scores individuals based on their social status and behavior.
In China, it functions as a tool of a surveillance society and is being developed as a national policy in four sectors: “public service,” “commerce,” “society,” and “judiciary.”
Specific restrictions include bans on using airplanes and high-speed trains, exclusion from private schools, prohibition of establishing organizations such as NPOs, exclusion from prestigious jobs, bans from hotels, reduction of internet connection speeds, and the public disclosure of personal information on websites and media. Conversely, high scores can result in various “privileges.”
However, such a system raises concerns about individual privacy and freedom, and debates over its operation continue.
The prohibition of the use of social credit scores within the EU is intended to ensure that the use of AI technology is fair and transparent.
Strengthening Restrictions on Generative AI
One of the AI systems that falls under the “limited risk” category of the EU regulatory law is generative AI.
Generative AI refers to a type of AI that creates new content or solutions based on training data; it has gained attention in recent years through examples such as ChatGPT. However, generative AI also poses various challenges that necessitate regulation.
In response to the rapid evolution and proliferation of generative AI, the AI regulatory law has added new considerations and requirements pertaining to generative AI.
Specifically, generative AI vendors, including companies like OpenAI, are obligated to disclose copyrighted data used in training their large language models (LLMs).
The aim here is to enhance the transparency of generative AI and strengthen the regulation of risk management.
In EU legislation, the principle of “transparency” has traditionally been emphasized, starting with the GDPR (General Data Protection Regulation). Preemptive disclosure obligations regarding protective measures and the purposes of AI use are imposed on regulated entities, and this principle has become an international “gold standard.”
Restrictions on the Use of Emotion Recognition AI
Emotion recognition AI, which falls under the “limited risk” category of the EU regulatory law, is an AI system subject to transparency obligations, and limited duties such as prior notification of AI use are imposed.
Emotion recognition AI refers to AI that can read and interpret human emotional changes.
Specifically, there are four types of emotion recognition AI, which analyze emotions such as happiness, anger, sorrow, and interest, as well as levels of engagement, through microphones, cameras, and sensors:
- Text-based emotion recognition AI: Analyzes written text or voice data converted to text to determine emotions.
- Voice-based emotion recognition AI: Analyzes emotions from human speech.
- Facial expression emotion recognition AI: Reads emotions from facial expressions through cameras.
- Biometric emotion recognition AI: Recognizes emotions from biometric data such as brainwaves and heart rate.
These technologies are being utilized in various settings, including customer service, call centers, and sales positions. As the technology advances, its application in the medical field is also anticipated.
However, the protection of privacy concerning biometric information and personal data collected, as well as the establishment of corresponding laws, will be necessary.
Summary: The Current State and Future Outlook of AI Regulation
We have discussed the current state and future outlook of the EU’s “AI Regulation,” as well as its impact on Japanese companies. The world’s first “EU AI Regulation” has a high potential to become an international “gold standard.”
For Japanese companies looking to enter the EU market, it will be crucial to keep a close eye on the developments of this AI Regulation. We recommend consulting with attorneys who are knowledgeable in international legal affairs and AI technology regarding AI regulations in the EU.
Guidance on Measures by Our Firm
Monolith Law Office is a law firm with extensive experience in both IT, particularly the internet, and legal matters. The AI business is fraught with numerous legal risks, and the support of attorneys well-versed in AI-related legal issues is essential. Our firm provides sophisticated legal support for AI businesses, including ChatGPT, through a team of AI-knowledgeable attorneys and engineers. Our services include contract drafting, legality reviews of business models, protection of intellectual property rights, and privacy measures. Please refer to the article below for more details.
Areas of practice at Monolith Law Office: AI (including ChatGPT) Legal Affairs[ja]