EU AI Regulation and the Necessary Responses for Japanese Companies

On July 12, 2024, the European Union (EU) promulgated the ‘EU AI Act,’ which came into effect on August 1 of the same year.
This law regulates the use and provision of AI systems within the EU, and from 2025 onwards, it will require certain responses from Japanese companies as well.
Specifically, just as Japanese e-commerce operators serving customers in the EU must comply with the EU’s ‘General Data Protection Regulation (GDPR),’ Japanese companies providing AI-related products or services to customers within the EU may also be subject to the EU AI Act.
Here, we will explain the compliance measures required by the relevant businesses following the enforcement of this law, including the risk classification of AI systems and conformity assessments.
Understanding the Difference Between ‘Regulations’ and ‘Directives’ in the EU
Before delving into the AI Act itself, it is necessary to understand, as background, the difference between ‘regulations’ and ‘directives’ in EU law.
Firstly, ‘regulations’ are legislative acts that apply directly to all member states, companies, etc., within the EU. This means that they take precedence over the national laws of EU member states and ensure that a unified set of rules is applied throughout the EU. Therefore, once a regulation comes into effect, the same regulatory content is applied across all EU member states.
On the other hand, ‘directives’ are legislative acts aimed at harmonizing or aligning regulatory content among EU member states. As a rule, however, directives do not apply directly to member states; each country must transpose the content defined by the directive into its national law, within the deadline set by the directive itself (commonly two to three years after its publication in the Official Journal of the EU).
A characteristic of ‘directives’ is that they allow member states a certain degree of discretion when transposing them into national law, which can lead to differences in the legal content of each country. In other words, it is important to be aware that laws based on ‘directives’ are not completely unified within the EU and may vary slightly from country to country.
With this distinction in mind, note that the AI Act was adopted as a ‘regulation.’ This means that the AI Act applies directly to businesses located within the EU, without any transposition into national law.
Related article: Essential Reading for Businesses Expanding into Europe: Key Points on EU Law and Legal Systems
Extraterritorial Application of the EU AI Act
What is Extraterritorial Application?
Extraterritorial application refers to the application of a country’s laws to actions that take place outside the sovereign territory of that country. The rationale behind allowing extraterritorial application stems from the globalization of the economy and the internationalization of corporate activities, aiming to ensure fair and proper conduct of economic activities worldwide.
An example that brought widespread recognition to this concept is the GDPR (General Data Protection Regulation) of the EU. Under the GDPR, even businesses without a base in the EU may be subject to its regulations (extraterritorial application) if they meet the following criteria:
- Providing services or goods to individuals within the EU
- Processing personal data with the purpose of monitoring the behavior of individuals within the EU
For instance, whether the GDPR applies to the processing of personal data of employees sent on business trips to the EU by companies based outside the EU was initially a point of debate; guidelines published in 2020 clarified that such processing is not subject to the GDPR.
Extraterritorial Application under the EU AI Act
The EU AI Act likewise applies extraterritorially to businesses located outside the EU. The following entities and activities fall within its scope:
- Providers: Those who develop an AI system or GPAI model (or have one developed) and place it on the EU market or put it into service under their own name or trademark
- Users: Those who use an AI system under their own authority (excluding use of AI systems in the course of a personal, non-professional activity); the Act refers to such users as ‘deployers’
- Importers: Importers located or established within the EU who introduce AI systems bearing the name or trademark of a natural or legal person established outside the EU into the EU market
- Distributors: Natural persons or legal entities in the supply chain that provide AI systems to the EU market, other than providers or importers
As such, even businesses located outside the EU are directly subject to the EU AI Act when they provide, use, import, or distribute AI systems or GPAI models within the EU.
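As a rough illustration of how this scope test works in practice, the following is a minimal screening sketch in Python. It assumes only the four roles listed above; the `in_scope` helper and its parameters are hypothetical constructs for illustration, and actual applicability is a legal question requiring case-by-case analysis.

```python
# A minimal screening sketch, assuming the four roles listed above.
# Illustrative only: actual applicability is a legal question.
ROLES_IN_SCOPE = {"provider", "user", "importer", "distributor"}

def in_scope(role: str, touches_eu_market: bool) -> bool:
    """Return True when an operator in one of the listed roles provides,
    uses, imports, or distributes an AI system or GPAI model in the EU,
    regardless of where the operator itself is established."""
    return role in ROLES_IN_SCOPE and touches_eu_market

if __name__ == "__main__":
    # A Japanese provider with no EU office that places an AI system on
    # the EU market still falls within scope (extraterritorial application).
    print(in_scope("provider", touches_eu_market=True))   # True
    print(in_scope("provider", touches_eu_market=False))  # False
```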
Key Features of the EU AI Regulation: A Risk-Based Approach

What is a Risk-Based Approach?
One of the most distinctive features of the EU AI Act is that it regulates AI according to the ‘nature and extent of risk’ it poses (a risk-based approach).
A “risk-based approach” is a method of adjusting the intensity of regulation to the content and degree of risk: the more severe the risks an AI system may pose, the more stringent the rules it faces, while lower-risk systems are regulated more leniently. This spares low-risk systems from excessive regulation while ensuring that high-risk systems are appropriately monitored and managed.
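To make the tiered structure concrete, here is a minimal sketch assuming the four categories explained in the sections that follow. The tier descriptions paraphrase the obligations discussed below; the example use cases and the `classify_use_case` helper are hypothetical and are not drawn from the Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, from strictest to lightest."""
    UNACCEPTABLE = "prohibited in principle"
    HIGH = "permitted subject to conformity assessment and ongoing duties"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "no specific obligations; codes of conduct encouraged"

# Hypothetical mapping for internal triage only -- legal classification
# must follow the Act's annexes and professional advice, not a lookup.
EXAMPLE_USE_CASES = {
    "social scoring of natural persons": RiskTier.UNACCEPTABLE,
    "AI component of a medical device": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify_use_case(description: str) -> RiskTier:
    """Return the example tier for a known use case, defaulting to HIGH
    so that uncertain cases receive the most cautious treatment."""
    return EXAMPLE_USE_CASES.get(description, RiskTier.HIGH)

if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.name} -> {tier.value}")
```

Defaulting unknown cases to the high-risk tier reflects a conservative compliance posture, not anything mandated by the Act.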
AI Systems with Unacceptable Risks
Firstly, AI systems considered to pose unacceptable risks are regarded as threats to people and are, in principle, prohibited. Examples include voice-activated toys that encourage dangerous behavior in children by manipulating the cognition or behavior of vulnerable user groups; social scoring systems that classify people based on behavior, socio-economic status, or personal characteristics; and ‘real-time remote biometric identification systems,’ which use technologies such as facial recognition to compare human biometric data against reference databases and identify individuals remotely.
However, there are exceptions under which these systems may be used. Real-time remote biometric identification systems are permitted only in a limited set of serious cases, while post-event remote biometric identification systems are permitted only with court authorization for the purpose of prosecuting serious crimes.
The permitted exceptions include searching for missing children or potential crime victims; preventing specific and imminent threats to human life or safety, including terrorist attacks; and detecting and locating perpetrators or suspects of serious crimes. These exceptions are subject to strict conditions, including the principle of prior court approval, and demand cautious operation of such AI systems.
High-Risk AI Systems
Next, AI systems classified as high-risk have the potential to negatively impact safety or fundamental human rights. These systems are permitted for use provided they meet certain requirements and obligations (conformity assessments).
High-risk AI systems are broadly categorized into two groups. First, AI systems used in products that fall under EU product safety legislation, which includes, for example, toys, aviation, automobiles, medical devices, and lifts. Second, AI systems that are required to be registered in the EU database and fall under specific sectors. These sectors encompass the management and operation of critical infrastructure, education and vocational training, employment and worker management, access to essential public services and benefits, law enforcement, immigration and asylum, border control, and support for the interpretation and application of law.
High-risk AI systems require evaluation before market introduction and throughout their lifecycle. Furthermore, the right to lodge complaints about AI systems with designated national authorities is recognized.
Broadly speaking, machinery and vehicles whose operation directly bears on human life and physical safety are likely candidates for the high-risk category. Autonomous driving AI, for instance, could fall under it, so Japanese companies developing and deploying autonomous driving AI overseas must carefully assess whether it meets the high-risk requirements and respond accordingly.
AI Systems with Limited Risks
AI systems with limited risks are those anticipated to pose transparency-related risks, including impersonation, manipulation, and fraud. Chatbots, deepfakes, and generative AI fall into this category, and according to the European Parliament, the majority of AI systems currently in use are classified here. Examples include automatic translation systems, AI-enabled video games, robots that perform repetitive manufacturing processes, and systems such as the ‘Eureka Machine.’
While generative AI is not classified as high-risk, it is required to meet transparency requirements and comply with EU copyright law. Specifically, the following measures are necessary (a minimal sketch of the disclosure measure follows this list):
- Clearly disclosing that the content has been generated by AI
- Designing models to prevent the creation of illegal content
- Disclosing an overview of the copyrighted data used to train the AI
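The Act does not prescribe a specific labelling format for the first measure, so the following is only a minimal sketch of how AI-generated text might be disclosed; the `GeneratedContent` type, the notice string, and the `label_as_ai_generated` helper are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """A piece of text produced by a generative model (hypothetical type)."""
    body: str
    model_name: str

def label_as_ai_generated(content: GeneratedContent) -> str:
    """Prepend a human-readable disclosure so the reader can recognise
    the text as AI-generated; the notice format is an assumption, as the
    Act does not prescribe a specific wording."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    notice = f"[AI-generated content | model: {content.model_name} | {stamp}]"
    return f"{notice}\n{content.body}"

if __name__ == "__main__":
    draft = GeneratedContent(body="Sample marketing copy.", model_name="example-model")
    print(label_as_ai_generated(draft))
```

In practice, the disclosure mechanism would need to fit the medium: images, audio, and video call for watermarking or metadata rather than a text banner.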
Furthermore, advanced and influential general-purpose AI models (GPAI models), such as ‘GPT-4,’ may pose systemic risks and are therefore required to undergo thorough assessment. Providers are also obliged to report serious incidents to the European Commission. Moreover, content generated or modified by AI (images, audio, video files, deepfakes, etc.) must be clearly marked as AI-generated so that users can recognize it as such.
AI Systems with Minimal Risk
Finally, for AI systems that pose minimal risk, the EU AI Act establishes no specific regulations. Examples include spam filters and recommendation systems. For this category, rather than binding rules, the development of and adherence to voluntary codes of conduct are merely encouraged.
Requirements and Obligations for High-Risk AI Systems

Duties of Providers, Users, Importers, and Distributors
As the classification above makes clear, high-risk AI systems are subject to particularly stringent regulation because of the severity of their risks, and specific duties are imposed on providers and users.
Firstly, providers, users, importers, and distributors are required to establish a risk management system (Article 9). This mandates the construction and implementation of a system to identify and properly manage the inherent risks of high-risk AI systems, as well as its documentation and maintenance. Regarding data governance (Article 10), the use of training, validation, and testing datasets that meet quality standards is required. This is because the quality and reliability of data must be strictly managed even during the development phase of AI systems.
Furthermore, the creation of technical documentation is obligatory (Article 11). This documentation must contain information necessary to demonstrate that the high-risk AI systems comply with regulatory requirements and must be prepared to be provided to the competent authorities of member states or third-party certification bodies. Additionally, the design and development of a log function that automatically records events during the operation of AI systems are required (Article 12). High-risk AI systems must also be registered in a database under EU control before being placed on the market, and providers are responsible for establishing and maintaining a documented quality control system.
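As an aid to tracking these duties, here is a minimal sketch of the obligations cited above, modelled as a simple compliance checklist. The entries paraphrase the article references in the preceding paragraphs; the data structure and the `outstanding_items` helper are hypothetical, and completing such a checklist does not by itself establish legal conformity.

```python
# A hypothetical compliance checklist paraphrasing the duties cited above;
# completing it does not by itself establish legal conformity.
HIGH_RISK_PROVIDER_CHECKLIST = {
    "Article 9": "Risk management system established, documented, maintained",
    "Article 10": "Training, validation, and test datasets meet quality criteria",
    "Article 11": "Technical documentation prepared for competent authorities",
    "Article 12": "Automatic event logging designed into the system",
    "Registration": "System registered in the EU database before market placement",
    "Quality system": "Documented quality control system in place",
}

def outstanding_items(completed: set[str]) -> list[str]:
    """Return the checklist entries not yet marked complete."""
    return [f"{key}: {task}" for key, task in HIGH_RISK_PROVIDER_CHECKLIST.items()
            if key not in completed]

if __name__ == "__main__":
    done = {"Article 9", "Article 11"}
    for item in outstanding_items(done):
        print("TODO ->", item)
```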
Provider Obligations
Providers must retain the technical documentation, documents relating to the quality management system, approvals or decisions issued by third-party certification bodies, and other related documents for ten years after the system is placed on the market or put into service, and must submit them upon request from the national regulatory authorities. Providers are thus responsible for maintaining the quality and safety of AI systems over the long term and for ensuring transparency.
User Obligations
Users, for their part, bear specific obligations associated with the use of high-risk AI systems. They must retain the logs automatically generated by a high-risk AI system for a period appropriate to the system’s intended purpose, unless otherwise provided by EU law or the law of a member state; specifically, retention for at least six months is mandated.
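A minimal sketch of how that minimum might be enforced in a log-rotation job follows; the six-month floor comes from the obligation just described, while the 183-day approximation and the `may_delete_log` helper are assumptions made for illustration.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Six-month minimum retention, per the obligation described above;
# approximating six months as 183 days is an assumption for illustration.
MIN_RETENTION = timedelta(days=183)

def may_delete_log(created_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True only once a log entry is older than the minimum
    retention period; real systems must also honour any EU or
    member-state rules that extend it."""
    now = now or datetime.now(timezone.utc)
    return now - created_at >= MIN_RETENTION

if __name__ == "__main__":
    utcnow = datetime.now(timezone.utc)
    print(may_delete_log(utcnow - timedelta(days=200)))  # True: past the floor
    print(may_delete_log(utcnow - timedelta(days=10)))   # False: must be kept
```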
Moreover, before putting a high-risk AI system into operation or use in the workplace, an employer who is a user must notify employee representatives and the affected employees in advance that the system will be used. This requirement exists to protect workers’ rights and ensure transparency.
In this way, strict requirements and obligations are imposed on both providers and users of high-risk AI systems. Especially when dealing with advanced AI technologies such as medical devices or autonomous driving systems, businesses may need to conduct conformity assessments and undergo review by third-party certification bodies while ensuring consistency with existing regulatory frameworks, so careful and well-planned responses are required.
Phased Implementation Schedule of the EU AI Act

The EU AI Act is being implemented in stages, from its publication through to full application, and businesses must prepare and respond at each phase.
On July 12, 2024, the AI Act was published in the Official Journal of the EU, and it entered into force on August 1 of the same year. At this stage, businesses are expected to review and assess the content of the regulations.
On February 2, 2025, the provisions on ‘general provisions’ and ‘AI systems with unacceptable risks’ apply. A business handling an AI system with unacceptable risks must cease operating it immediately.
Subsequently, on May 2, 2025, the Codes of Practice for general-purpose AI (GPAI) model providers are to be published, and businesses must act in accordance with them.
Then, on August 2, 2025, the provisions on ‘GPAI models’ and ‘penalties’ apply, and regulatory authorities are appointed in each member state. From this point, providers of GPAI models must comply with the relevant regulations.
On February 2, 2026, guidelines on how to implement AI systems under the AI Act are to be published. At the same time, post-market monitoring of high-risk AI systems becomes mandatory, and businesses must have a system in place to comply.
Furthermore, on August 2, 2026, the provisions on the ‘high-risk AI systems’ listed in Annex III apply. By this point, member states must have established AI regulatory sandboxes, and compliance with the rules for those high-risk AI systems becomes mandatory.
Finally, on August 2, 2027, the provisions on the ‘high-risk AI systems’ listed in Annex I apply, mandating compliance for the AI systems specified there.
In this manner, the AI Act takes effect in stages over several years, with regulations applying sequentially according to the level of risk. Businesses must accurately grasp each application date and proceed with the necessary preparations for the AI systems concerned.
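For internal planning, the staged schedule above can also be kept as machine-readable data. The sketch below encodes the dates cited in this section with paraphrased descriptions; the `AI_ACT_MILESTONES` structure and the `upcoming_milestones` helper are illustrative conveniences, not anything prescribed by the Act.

```python
from datetime import date

# The staged application dates cited in this section; the descriptions
# are paraphrased summaries, not the Act's own wording.
AI_ACT_MILESTONES = [
    (date(2024, 8, 1), "Act in force; review and assess the regulations"),
    (date(2025, 2, 2), "General provisions and unacceptable-risk prohibitions apply"),
    (date(2025, 5, 2), "Codes of Practice for GPAI model providers published"),
    (date(2025, 8, 2), "GPAI model and penalty provisions apply"),
    (date(2026, 2, 2), "Implementation guidelines; post-market monitoring mandatory"),
    (date(2026, 8, 2), "Annex III high-risk provisions apply; sandboxes established"),
    (date(2027, 8, 2), "Annex I high-risk provisions apply"),
]

def upcoming_milestones(today: date) -> list[str]:
    """List the milestones that have not yet taken effect as of `today`."""
    return [f"{d.isoformat()}: {task}" for d, task in AI_ACT_MILESTONES if d > today]

if __name__ == "__main__":
    for line in upcoming_milestones(date(2025, 1, 1)):
        print(line)
```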
Related article: What is the Current State and Outlook of the AI Regulation Act in the EU? Explaining the Impact on Japanese Companies
Guidance on Measures Provided by Our Firm
Monolith Law Office is a law firm with extensive experience in both IT, particularly the internet, and legal matters.
AI business comes with numerous legal risks, and the support of attorneys well-versed in AI-related legal issues is essential. Our firm provides sophisticated legal support for AI businesses, including those involving ChatGPT, through a team of AI-knowledgeable attorneys and engineers. Our services include contract drafting, legality reviews of business models, intellectual property protection, and privacy compliance. Details are provided in the article below.
Areas of practice at Monolith Law Office: AI (ChatGPT, etc.) Legal Affairs