
MONOLITH LAW MAGAZINE

IT

The Current State of Laws Regulating AI: A Comparison between Japan and the EU, and Key Points for Countermeasures


Generative AI, such as ChatGPT, has become a major trend. Now being incorporated into the business scene, generative AI is said to be the catalyst for the “Fourth AI Boom.” In line with this, efforts are underway globally to establish legal frameworks for regulating AI.

This article will discuss laws related to AI, including the proper handling of intellectual property rights and confidential information such as personal data.

The Definition and History of AI (Artificial Intelligence)

There is no strict legal definition of AI (Artificial Intelligence), and various definitions have been proposed. Here are some examples:

  • “Kōjien”: A computer system equipped with intellectual functions such as reasoning and judgment.
  • “Encyclopædia Britannica”: The ability of digital computers or computer-controlled robots to perform tasks associated with intelligent beings.
  • Japanese Society for Artificial Intelligence article “AI as General Knowledge”: The question “What is AI?” has no simple answer. Even among AI experts there is significant debate, to the extent that it could fill an entire book. Summarizing the common ground, AI can be described as technology that mechanically realizes the same intellectual tasks as humans.
  • Academic paper “Deep Learning and Artificial Intelligence”: AI is a field of study that attempts to constructively elucidate the mechanisms of human intelligence.
  • Academic paper “In Search of the Ideal Society with Artificial Intelligence”: Information technology, including AI, is ultimately a tool.

AI is also broadly described as the technologies, software, computer systems, and algorithms that replicate human intellectual abilities on a computer.

As for specialized AI, the following are examples:

  • Natural Language Processing (machine translation, syntactic analysis, morphological analysis, RNN, etc.)
  • Expert systems that mimic the reasoning and judgment of experts
  • Image and voice recognition that detect and extract specific patterns from data

The field of AI has been continuously researched and developed since the dawn of computers in the 1950s. The first AI boom, which lasted until the 1970s, centered on “search and reasoning” research; the second AI boom in the 1980s produced expert systems through “knowledge representation” research. These two paradigm shifts fueled their respective booms.

Since the 2000s, the emergence of big data and the recognition of deep learning’s usefulness in image processing, especially after the introduction of AlexNet in 2012, have rapidly accelerated research, leading to the third AI boom.

Between 2016 and 2017, AI that incorporated deep learning and reinforcement learning (Q-learning, policy gradient methods) emerged.

The main revolution of the third AI boom is evident in natural language processing and image processing through sensors, but it also significantly impacts technology development, sociology, ethics, and economics.

On November 30, 2022, OpenAI’s launch of ChatGPT, a versatile generative AI for natural language, garnered significant attention and energized the generative AI business. Some refer to this social phenomenon as the fourth AI boom.

Business Scenarios Where AI Laws Must Be Reviewed


Generative AI, a type of AI, is a useful tool but also carries risks such as spreading misinformation, facilitating crime, and sometimes even threatening democracy.

These risks associated with AI have become unavoidable challenges. Therefore, we will explain the business scenarios where legal regulations should be reviewed, from both the user and provider perspectives.

Using Text-Generating AI

Since the launch of ChatGPT in November 2022, text-generating AI has been spotlighted globally as a versatile tool that can handle complex requests, promising efficiency and cost-effectiveness in work.

However, the risks associated with the use of text-generating AI have also become known. It is essential to be aware of these potential risks and the laws that must be complied with to avoid them.

ChatGPT, representing text-generating AI, carries the risk of leaking information (prompts) entered by users if no measures are taken. ChatGPT has the capability to collect, store, and use prompts, posing a risk of leaking personal information, corporate secrets, and confidential information obtained through Non-Disclosure Agreements (NDAs).

In addition, there are risks specific to ChatGPT, such as generating and spreading misinformation (hallucinations) and copyright infringement. Therefore, fact-checking the output is indispensable.

Using Image-Generating AI

When utilizing image-generating AI in business, it is crucial to keep in mind the risk of copyright infringement.

According to OpenAI, the copyright of images or texts generated by ChatGPT and similar tools generally belongs to the user. Users can utilize ChatGPT for any purpose, including commercial use.

However, caution is necessary in the following aspects:

The training data of ChatGPT includes a vast amount of content published on the internet, most of which are copyrighted works (texts, images, music, videos, etc.). Therefore, the content generated could potentially infringe on someone else’s copyright.

AI Development and Provision of Generative AI Services

The AI business involves various laws, and as the legal framework is being developed globally, it is essential to comply with existing laws while being flexible to adapt to new ones.

In the next chapter, we will explain the laws related to AI in Japan and the EU’s “AI Regulation Act,” the world’s first comprehensive international AI law, on which provisional agreement was reached in December 2023.

Laws on AI in Japan

Currently in Japan, AI is not regulated by laws with coercive force, and the policy is to deal with it through self-regulation. Here, we will explain the current laws that should be noted when utilizing AI.

Reference: Ministry of Economy, Trade and Industry | “Governance Guidelines for the Implementation of AI Principles ver. 1.1”[ja]

Copyright Law

The “Revised Copyright Law,” which came into effect in January 2019 (Heisei 31), introduced a new provision for “information analysis” (Article 30-4, Paragraph 1, Item 2 of the same law) among the limitations on rights (exceptions where permission is not required). Uses that do not aim at enjoying the thoughts or feelings expressed in copyrighted works, such as information analysis during AI development and learning stages, are in principle allowed without the copyright holder’s permission.

The amendment clarified that machine learning, including the deep learning used in AI, falls under “information analysis” by explicitly defining the term:

Information analysis (extracting information related to the language, sound, image, and other elements constituting the information from a large number of copyrighted works or other vast amounts of information, and performing comparison, classification, or other analyses)

Copyright Law Article 30-4, Paragraph 1, Item 2

However, it is important to note that creations generated using AI may constitute copyright infringement if they are found to be similar to, and to rely on, other copyrighted works.

Furthermore, inputting copyrighted material into ChatGPT as a prompt could potentially infringe on reproduction rights. Modifying someone else’s copyrighted work using generative AI could also lead to infringement of rights such as adaptation rights.

According to OpenAI’s terms of use, the rights to content created by ChatGPT belong to the user, and commercial use is allowed. However, if it is difficult to determine whether the content infringes copyright, consulting a professional is recommended.

In the unfortunate event of being accused of copyright infringement by the copyright holder, one may be subject to civil liabilities (injunctions against use, compensation for damages, solatium, restoration of honor, etc.) or criminal liabilities.

Unfair Competition Prevention Law

On July 1, 2019 (Reiwa 1), the revised Unfair Competition Prevention Law came into effect. Previously, it was difficult to prevent unfair competition involving items not protected by patent law or copyright law, or items not qualifying as “trade secrets” under the Unfair Competition Prevention Law.

Therefore, this amendment has established civil measures (claims for injunctions, estimation of damages, etc.) against malicious acts such as unauthorized acquisition or use of valuable data (limited provision data).

Laws on the Use of AI in the EU


The EU’s legal system has three components: primary law (treaties), secondary law (EU legislation), and case law. Secondary law, which is adopted on the basis of primary law and binds member states directly or indirectly, is also called derived law. It comes in broadly five types, and the EU’s “AI Regulation Act” is a Regulation: a unified rule that directly binds all EU member states.

On the other hand, Directives are a form of legislation that imposes indirect legal obligations on each EU member state to enact or amend national laws to implement the contents of the Directive. The deadline for implementation is generally within three years after publication in the Official Journal of the EU.

Related article: A Must-Read for Businesses Expanding into Europe: Key Points on EU Law and Legal System[ja]

In this chapter, we will explain the latest trends concerning “Directives” and “Regulations” among the legal regulations related to the use of AI in the EU.

AI Liability Directive Proposal

On September 28, 2022, the European Commission published the “AI Liability Directive Proposal” along with a revision of the “Product Liability Directive.” The proposal establishes rules on the legal liability of AI businesses in the EU (European Union) that align with the “AI Regulation Act,” making it a crucial piece of the legal framework. Since the EU’s “Collective Redress Directive,” applicable from June 2023, will also apply to it, related Japanese companies need to understand its content.

This represents a significant change in the legal rules on civil liability for software, including AI systems in the EU, suited for the digital age’s circular economy and global value chains.

Related article: Current Status and Outlook of the AI Regulation Act in the EU? Explaining the Impact on Japanese Companies[ja]

The purpose of the “AI Liability Directive Proposal” is to establish rules for civil liability based on non-contractual obligations for damages caused by AI systems, thereby improving the functioning of the market within the EU.

In other words, note that the proposal covers fault-based liability for damages (such as tort liability): it is not limited to damages arising from the safety defects covered by the “Product Liability Directive,” and it does not apply to contractual liability (liability for non-performance of obligations and liability for non-conformity with the contract).

For example, damages such as discrimination by AI recruitment systems are also considered to be covered.

The directive proposal introduces measures to reduce the burden of proof for developers of “high-risk AI systems” defined in the “AI Regulation Act” to address the AI black box problem, including the “presumption of causation” and a “disclosure of evidence system.”

If a party fails to comply with a disclosure order, the “AI Liability Directive Proposal” requires that a breach of the duty of care and causation be presumed, while the revised “Product Liability Directive” requires that defect and causation be presumed, sanctions for non-compliance stronger than those available under Japanese civil procedure law.

As a first stage, the directive proposal is limited to “measures to reduce the burden of proof” in light of AI’s black-box nature, defining the requirements for plaintiff standing, evidence disclosure, evidence preservation, and the presumption of causation.

The second stage provides for review and evaluation. The European Commission will establish a monitoring program to review incident information, assess the appropriateness of imposing strict liability (liability without fault) on operators of high-risk AI systems and the necessity of introducing compulsory insurance, and report its findings to the European Council, the European Parliament, and other bodies.

Revision of the Product Liability Directive

The “Product Liability Directive” is an EU law enacted in 1985 to protect consumers, defining the liability of manufacturers for damages caused by defective products.

The proposed revision includes “software” as a new category of “product” under product liability, applying strict liability to operators of AI systems if the AI system is found to be defective. Additionally, the criteria for determining a defect have been updated to include the capability for continuous learning after installation and software updates.

Under the current Japanese Product Liability Law, software is generally not considered a movable object and therefore does not fall under the category of “product” covered by the law. The EU revision, by contrast, changes the concept of “product” to include software. It also introduces “measures to reduce the burden of proof,” which could significantly affect software, including AI systems and advanced technology products.

AI Regulation Act

The “AI Regulation Act (AI Act)” is a comprehensive set of unified EU rules (secondary legislation) targeting AI businesses, consisting of 85 articles. It is the world’s first international law regulating AI, enacted following a provisional agreement reached by the European Commission, the European Parliament, and the Council of the European Union on December 9, 2023. It is expected to come into effect and be fully implemented in 2024.

This law is a cornerstone of the EU’s digital strategy known as “A Europe fit for the Digital Age,” aimed at addressing new challenges and risks in the evolving digital era. It is part of a broad AI package designed to ensure the safety of AI and fundamental rights, and to strengthen efforts, investments, and innovation in AI across the EU.

The EU’s “AI Regulation Act” applies directly to EU member states and also has extraterritorial application for businesses operating within the EU, affecting companies located outside the EU as well.

In case of violations, substantial fines can be imposed (up to €30 million or approximately 4.7 billion yen, or 6% of the worldwide annual turnover, whichever is higher), potentially barring businesses from operating in the AI sector within the EU.

Therefore, companies that have already introduced AI in the EU market, including those from Japan, as well as those considering entering the EU market in the future, are required to comply with the new EU AI regulations.

The essence of the “AI Regulation Act” is based on three main features: “risk-based AI classification,” “requirements and obligations,” and “support for innovation.”

The regulation targets businesses deploying AI systems and services in the European market, including AI developers, deployers, providers, importers, distributors, and users.

AI risk levels are categorized into four tiers, with regulations applied according to each level. As the guidelines make clear, ensuring AI literacy among AI developers, providers, and users is essential for realizing the AI principles. For more details, please see the related article.

Related article: Current Status and Outlook of AI Regulation in the EU: Implications for Japanese Companies[ja]

Key Points to Consider in AI-Related Laws


This section primarily discusses the legal considerations for companies intending to use generative AI.

Copyright of AI-Generated Works

When it comes to works generated by generative AI, the following legal issues should be considered:

  • Whether the work infringes on copyright
  • Whether copyright is granted to works generated by generative AI

As mentioned earlier, works generated by ChatGPT may infringe copyright if they are found to be similar to, and to rely on, existing copyrighted material. On the other hand, is copyright granted to works created by generative AI?

Under the Copyright Law, a “work” is defined as a creative expression of thoughts or emotions. Since AI does not possess thoughts or emotions, it is argued that content created by generative AI is not eligible for copyright.

However, the process of generating content with AI is a black box for users, making it extremely difficult to produce the desired content. Therefore, if a user’s originality is recognized in the prompts, it could be argued that the user’s “thoughts or emotions” have been “creatively expressed” through generative AI, potentially qualifying for copyright.

Handling of Personal Information When Using AI

When using AI, it is necessary to be aware of the potential for violating the Japanese Personal Information Protection Law. Measures such as not inputting personal or privacy-related information are required.

If personal information is entered in the prompt, it may constitute a third-party provision of personal information to the service provider. Generally, the consent of the individual is required to provide personal information to a third party, so without such consent, it could potentially violate the Japanese Personal Information Protection Law.

ChatGPT is designed not to output personal information in the chat even if such information is accidentally entered. This stems from OpenAI’s policy of not storing or tracking personal information, but caution is advised because other services and platforms may differ.
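One practical safeguard against entering personal information into prompts is to insert a redaction step before any text leaves the company for an external generative AI service. The sketch below is illustrative only: the function name, patterns, and labels are assumptions, and a production system would need far more robust PII detection than a few regular expressions.

```python
import re

# Illustrative patterns for two common kinds of personal information.
# Real deployments would need broader coverage (names, addresses, IDs, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_JP": re.compile(r"0\d{1,4}-\d{1,4}-\d{3,4}"),  # Japanese-style numbers
}

def redact_prompt(prompt: str) -> str:
    """Mask detected personal information before the prompt is sent out."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

masked = redact_prompt("Contact Tanaka at tanaka@example.co.jp or 03-1234-5678.")
```

A step like this would sit between employees and the AI service (for example, in an internal proxy), so that consent requirements under the Personal Information Protection Law are not triggered by careless prompt contents in the first place.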

Risk Management for Companies Involved with AI

Risk management varies depending on a company’s business strategy, the purpose of AI usage, and related legal regulations. Therefore, it is crucial to implement appropriate risk measures tailored to the specific situation and objectives.

Companies utilizing generative AI should consider the following points to minimize risks:

  1. Human Resource Development: Proper use of generative AI requires specialized knowledge and skills. It is important to educate and train employees to understand the correct usage.
  2. Establishment, Implementation, and Operation of Internal Guidelines: Formulating internal guidelines on the use of generative AI and ensuring compliance by employees can reduce risks.
  3. Creation of a Promotion Organization for Utilization and Risk Management: Establishing an organization to promote the use of generative AI and placing a team within the organization responsible for risk management is effective.
  4. System Implementation: Careful selection and design of systems are necessary for the proper implementation of generative AI.

Moreover, the risks associated with the use of generative AI are diversifying, including information leakage, infringement of rights and privacy, concerns about the accuracy and safety of information, and the risk of bias. To avoid these risks, it is essential to introduce an appropriate governance and risk management framework.

Related article: “Risks of Corporate Adoption of ChatGPT: Explaining Cases and Measures Against Confidential Information Leakage”[ja]

Summary: The Need to Keep Up with AI Laws as They Are Still Being Developed

As the legal framework for AI business, including the world’s first international “AI Regulation Act” enacted in the EU on December 9, 2023, is still in the process of being established, companies are required to comply with existing laws while also being flexible in adapting to new legislation.

In Japan, there are currently no laws directly regulating AI, but it is essential to have a good understanding of related laws such as the Copyright Law, Personal Information Protection Law, and Unfair Competition Prevention Law, and to respond appropriately. Furthermore, it is necessary to closely monitor future amendments to these related laws and respond swiftly.

Guidance on Measures by Our Firm

Monolith Law Office is a legal practice with extensive experience in both IT, particularly the internet, and law. The AI business comes with many legal risks, and the support of attorneys well-versed in legal issues related to AI is indispensable.

Our firm provides advanced legal support for AI businesses, including those involving ChatGPT, through a team of lawyers and engineers proficient in AI. This support encompasses contract drafting, legality reviews of business models, protection of intellectual property rights, and privacy measures. Details are provided in the article below.

Areas of practice at Monolith Law Office: AI (including ChatGPT) Legal Affairs[ja]

Editor-in-Chief: Toki Kawase, Managing Attorney

An expert in IT-related legal affairs in Japan, he established MONOLITH LAW OFFICE and serves as its managing attorney. A former IT engineer, he has also been involved in the management of IT companies and has served as legal counsel to more than 100 companies, ranging from top-tier organizations to seed-stage startups.
