How to Prevent AI Lies and Copyright Infringement: Internal AI Policies for Safely Utilizing AI-Generated Content

Can AI be transformed into a driving force for business, or will it trigger brand damage? The turning point lies in whether effective "AI internal regulations" are in place. In particular, issues such as copyright infringement and the spread of misinformation caused by hallucinations can, once they occur, not only lead to legal sanctions under Japanese law but also instantly destroy the customer trust a company has built up.
This article explains the guidelines for establishing a compliance framework and regulations that should be initiated now to achieve safe AI operations while maintaining competitiveness in business in Japan.
Pitfalls in the “Quality” and “Rights” of AI-Generated Content Under Japanese Law
Generative AI has established itself as an attractive tool that significantly reduces the immense effort required to create something from scratch, dramatically enhancing efficiency in tasks ranging from creative work to administrative duties. However, behind this convenience lies legal risks fundamentally different from those associated with traditional IT tools under Japanese law.
The outputs produced by generative AI are probabilistically derived based on vast amounts of existing training data. Due to the black-box nature of its generation process, there is a constant risk that users may unknowingly infringe on others’ rights or spread plausible falsehoods as truths.
In the business context, the excuse “because AI created it” does not hold. Companies that integrate AI into their operations and individuals who use it are legally responsible for the output results. To avoid situations such as defamation due to the dissemination of inappropriate information or claims for damages due to copyright infringement, it is essential not to leave matters to individual discretion but to establish clear “guardrails” as an organization. It is necessary to swiftly develop robust internal regulations based on the latest laws and guidelines to prevent becoming either a perpetrator or victim of copyright infringement and to stop the spread of misinformation.
Risk 1: Avoiding Becoming a Perpetrator of Copyright Infringement Under Japanese Law

When utilizing generative AI in business, the most significant concern is the risk of infringing on others' copyrights and being held legally accountable under Japanese law. Because of how generative AI works, it tends to produce content that reflects the characteristics of existing copyrighted works included in its training data. Understanding this process correctly and clearly identifying when an action constitutes "infringement" is the first step in implementing defensive measures.
The Mechanism of Generative AI and Copyright Under Japanese Law
Generative AI creates new content by extracting patterns from vast amounts of data it has learned, based on the input instructions (prompts). In this process, AI does not understand the “meaning of information” but merely predicts the next word or pixel based on statistical probabilities. Therefore, there is a possibility that content resembling the works of specific creators or particular copyrighted materials may be generated, contrary to the user’s intentions.
According to the “Concepts on AI and Copyright” published by the Agency for Cultural Affairs, the determination of copyright infringement by AI-generated content is conducted under the same framework as regular copyrighted works in Japan. Specifically, infringement is established when “similarity” and “dependence” on existing copyrighted works are recognized.
Reference: Agency for Cultural Affairs|About AI and Copyright
Two Key Requirements for Infringement Under Japanese Copyright Law: “Similarity” and “Dependence”
Similarity refers to a state where the generated output is identical to, or shares the essential expressive characteristics of, the creative expression of an existing work. Dependence, on the other hand, means that the work was created "based on" an existing work. In traditional copyright infringement cases, the focus was on whether the creator had seen the work in question; with generative AI, however, the critical factor is whether the work was included in the AI's training data.
A particularly problematic issue arises when dependence may be recognized even if the user was unaware of the existing work, provided the AI had learned it. This is referred to as the “black box problem.” According to the Japanese Agency for Cultural Affairs, even if the content of the training data is unknown, dependence is more likely to be presumed if the AI has extensively learned from information available on the internet and the generated work is significantly similar to existing works.
Reference: Agency for Cultural Affairs | FY2023 Copyright Seminar AI and Copyright (P48)
Scope and Limitations of the Rights Restriction Provision (Article 30-4)
Under Japanese Copyright Law, Article 30-4, it is stipulated that during the learning phase of AI, works can be used without the copyright holder’s permission if the purpose is not to “enjoy” the thoughts or emotions expressed in the work. This provision broadly recognizes the use for non-enjoyment purposes, such as information analysis, and serves as a crucial legal foundation supporting AI development in Japan.
However, this provision is strictly applicable to the “learning” phase and does not extend to the “utilization” phase of generated outputs. Additionally, when conducting additional learning (such as LoRA) with the aim of outputting a specific creator’s style, it is important to note that the purpose of “enjoyment” may coexist, potentially excluding it from the application of Article 30-4.
Measures in AI Internal Regulations: Selection of Prompts and Services
Under the AI internal regulations, it is effective to impose restrictions at the input stage to minimize the risk of dependence. Specifically, it is advisable to prohibit, as a general rule, the inclusion of specific artist names or particular work titles in prompts. Additionally, for functions like Image-to-Image (i2i) that involve uploading existing images for generation, it is essential to strictly limit usage to images owned by the company or materials whose rights are clearly cleared.
Furthermore, the selection of AI services to be used is also crucial. Some paid plans offer “copyright indemnity” to cover the risk of copyright infringement. However, it is recommended to scrutinize the conditions for its application and incorporate a process in the regulations to select services that meet the company’s safety standards.
The following table summarizes the checkpoints that should be organized in internal regulations to avoid copyright infringement.
| Checkpoint | Specific Regulatory Direction | Underlying Perspective |
| --- | --- | --- |
| Prompt input | Prohibit inputting specific author names, work titles, and proper nouns | Prevents direct evidence of dependence |
| Additional learning (LoRA, etc.) | Restrict learning aimed at replicating the style of a specific rights holder | Avoids infringement risk when an "enjoyment" purpose coexists |
| Use of external images | Limit to company-owned or properly licensed materials | Blocks dependence from arising in i2i functions |
| Service selection | Adopt plans that permit commercial use and have transparent rights terms | Ensures compliance with the AI provider's terms of use and confirms indemnity coverage |
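As an illustration of the "prompt input" checkpoint above, the prohibition on specific proper nouns could be enforced mechanically before a prompt ever reaches the AI service. The blocklist entries and function names below are hypothetical; in practice the list would be maintained by the legal or compliance team.

```python
import re

# Hypothetical blocklist of proper nouns prohibited by the internal rules
# (specific artist names, work titles, etc.).
PROHIBITED_TERMS = ["ExampleArtist", "ExampleManga", "ExampleStudio"]

def check_prompt(prompt: str) -> list:
    """Return the prohibited terms found in a prompt (case-insensitive)."""
    found = []
    for term in PROHIBITED_TERMS:
        if re.search(re.escape(term), prompt, flags=re.IGNORECASE):
            found.append(term)
    return found

def submit_prompt(prompt: str) -> str:
    """Reject submission when the prompt references a prohibited name."""
    violations = check_prompt(prompt)
    if violations:
        raise ValueError(f"Prompt rejected; prohibited terms: {violations}")
    return prompt  # in a real system, forward to the AI service here
```

A filter like this cannot judge legal "dependence" by itself, but it does prevent the most direct evidence of dependence (an explicit instruction to imitate a named creator) from being created in the first place.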
Risk 2: Does Copyright Arise for AI-Generated Works Under Japanese Law? (To Avoid Becoming a Victim)
When using AI to create deliverables, it is crucial to consider whether your company can assert its rights if these deliverables are used by another company without permission. Under the current interpretation of Japanese law, there is a principle that copyright does not arise for works generated autonomously by AI. To secure rights over these creations and protect them as intellectual property, companies must meet certain conditions.
Definition of “Work” and the Threshold for Creativity Under Japanese Copyright Law
Under Article 2, Paragraph 1, Item 1 of the Japanese Copyright Act, a “work” is defined as something that creatively expresses “thoughts or emotions.” Since AI is a machine that possesses neither “thoughts” nor “emotions,” content automatically generated by AI is not recognized as a work. Therefore, results obtained merely by inputting a short prompt into AI are generally not considered to have “creative contribution” and are not legally protected. Such content is placed in a state where it is freely available for anyone to use, akin to the public domain.
For content to be recognized as a work, a human must use AI skillfully as a "tool," and that human must be found to have had "creative intent" and to have made a "creative contribution." The key factor in the emergence of rights is whether the human substantially controls the content of the expression beyond giving mere instructions.
Utilizing as a “Tool” to Secure Rights Under Japanese Law
To determine what actions are recognized as creative contributions under Japanese law, the following factors are comprehensively considered. Firstly, the content of the prompt must be specific and detailed. Instead of simply stating “a picture of a cat,” it is necessary to specify details such as composition, color usage, and brushwork, refining the ideal expression through trial and error. Secondly, it involves humans making edits or revisions to the AI-generated output, or combining multiple generated items to create a new composition. In such cases where human intervention is involved, the work is typically recognized as having copyright protection.
For companies, it is advisable to establish an operational flow that preserves the generation process (such as prompt history and revision logs) to defend against imitation by other companies of AI-generated works created during business operations in Japan.
Attribution of Rights for AI-Generated Works in Japanese Business Operations
When an employee uses AI as part of their job to make a creative contribution and completes a work, to whom do the rights belong? According to Article 15 of the Japanese Copyright Act (Work for Hire), if a work is created by an employee in the course of their duties based on the initiative of a corporation and is published under the corporation’s name, the author is considered to be the corporation unless otherwise stipulated.
Even when AI is used, if human creative contribution is recognized, this framework of work for hire can be applied. However, for parts autonomously generated by AI (where there is no human creative contribution), copyright does not arise, and thus Article 15 does not apply. It is important to clearly state in internal regulations that the rights to AI-generated works belong to the company and to include provisions that encourage operations involving creative contributions.
The following table illustrates the elements and judgment points for recognizing the authorship of AI-generated works.
| Judgment Element | Cases Where Authorship is Likely Recognized | Cases Where Authorship is Likely Denied |
| --- | --- | --- |
| Nature of prompts | Detailed, specific instructions specifying the details of the expression | Brief, abstract instructions; mere presentation of ideas |
| Trial-and-error process | Multiple generations and selections; repeated fine-tuning of instructions | Adoption of a single generation without modification |
| Human processing | Direct additions, corrections, or color changes to the generated work | Use of AI output as-is without modification |
| Composition and selection | Original combination and arrangement of multiple generated works | Presentation of a single generated work only |
Risk 3: Legal Defense Against Hallucinations (AI Falsehoods) Under Japanese Law

One of the significant technical issues associated with generative AI is hallucination. This phenomenon occurs when AI outputs plausible falsehoods, which can lead to serious consequences in business use, such as defamation due to the spread of incorrect information or decision-making based on erroneous data.
The Nature of Hallucinations and Their Business Implications
Hallucinations occur because AI focuses on the probabilistic correctness of word sequences rather than verifying the truthfulness of information. This can lead to the fabrication of non-existent legal precedents or the reporting of fictitious cases as facts. In fact, there have been reports in the United States where legal documents created using generative AI included fictitious precedents, causing significant issues in court.
In the context of business activities, disseminating information that includes hallucinations can lead to the following legal risks under Japanese law. Firstly, if the content damages the reputation of a specific individual or corporation, it may result in liability for defamation damages. Secondly, making investment decisions based on incorrect market data or financial forecasts could constitute a breach of the duty of care expected of a prudent manager (duty of care of a good manager). Furthermore, presenting incorrect product specifications or legal interpretations to customers could lead to liability for breach of contract or obstruction of business.
Technical Measures: Utilizing RAG (Retrieval-Augmented Generation) in Japan
As a technical defense measure, the implementation of RAG (Retrieval-Augmented Generation) is effective. This technology allows AI to generate responses by referencing accurate documents and reliable databases prepared in advance by the company, rather than relying on unspecified external information. This approach clarifies the basis of responses and significantly reduces the risk of hallucinations.
However, the introduction of RAG does not eliminate the need for human oversight. There remains a possibility that AI may misinterpret reference materials or produce incomplete summaries.
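To make the RAG flow concrete, the sketch below retrieves from a small set of company-approved documents and builds a prompt that instructs the model to answer only from those sources. Real systems retrieve using vector embeddings; simple word overlap is used here only to keep the example self-contained, and the document names and functions are hypothetical.

```python
# Company-approved reference documents (hypothetical examples).
APPROVED_DOCS = {
    "leave-policy": "Employees may take up to 20 days of paid leave per year.",
    "expense-policy": "Expenses above 50,000 yen require prior approval.",
}

def retrieve(question: str, k: int = 1) -> list:
    """Rank approved documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), doc_id, text)
        for doc_id, text in APPROVED_DOCS.items()
    ]
    scored.sort(reverse=True)
    return [(doc_id, text) for _, doc_id, text in scored[:k]]

def build_grounded_prompt(question: str) -> str:
    """Instruct the model to answer only from the retrieved sources."""
    sources = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below; if they do not contain "
        "the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```

Because each answer cites the document it was grounded in, a human reviewer can trace the basis of the response, which is exactly what makes hallucinations easier to detect.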
Operational Measures: Mandating Fact-Checking
The most crucial aspect of internal regulations is to mandate the process of human “fact-checking” to ensure that AI outputs are not taken at face value. According to the “AI Business Operator Guidelines” by the Ministry of Economy, Trade and Industry and the Ministry of Internal Affairs and Communications, AI users in Japan should understand the accuracy of output results and make responsible usage decisions.
Reference: Ministry of Economy, Trade and Industry|AI Business Operator Guidelines
Specifically, a process should be established to always cross-check facts and figures presented by AI with primary sources such as official government websites, public statistics, and official releases. Particularly for documents intended for external communication, materials related to high-value transactions, and legally binding documents, a strict double-check system should be incorporated into the regulations, recognizing that these are AI-generated products.
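The cross-checking process above can be supported by tooling that tracks, for each factual claim in an AI draft, whether a primary source and a human reviewer have been recorded. This is a minimal sketch under assumed record structures (`Claim`, `Draft` are hypothetical names), not a prescribed implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Claim:
    """A single factual claim extracted from an AI-generated draft."""
    text: str
    primary_source: Optional[str] = None  # e.g. URL of official statistics
    verified_by: Optional[str] = None     # name of the human reviewer

    @property
    def verified(self) -> bool:
        return self.primary_source is not None and self.verified_by is not None

@dataclass
class Draft:
    title: str
    claims: list = field(default_factory=list)

    def releasable(self) -> bool:
        """External release is allowed only when every claim is verified."""
        return all(c.verified for c in self.claims)
```

The design choice here mirrors the regulation: the system does not verify facts itself; it only refuses to mark a draft releasable until a human has attached a primary source and signed off on each claim.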
Example of Internal Regulations for Managing Rights and Authenticity in AI
Based on the risk analysis conducted so far, we present examples of specific clauses that should be included in internal regulations. These regulations not only define “prohibited actions” but also outline the “usage process,” providing employees with guidelines to safely utilize AI.
Prohibited Actions When Entering Prompts (Copyright and Security Related)
Example Regulations:
- Users must not input the names of specific individuals, celebrities, or artists into generative AI prompts to instruct the AI to imitate their style or characteristics.
- When inputting existing images, documents, source code, or other copyrighted works for adaptation or modification, it is only permissible if the company is the legitimate rights holder of the work or has obtained permission for AI use.
- It is strictly prohibited to input the company’s confidential information, personal information, or third-party unpublished copyrighted works into services that may be used as training data.
Usage Process for AI-Generated Content (Accuracy and Similarity Verification)
Example Regulations:
- When using AI-generated content for external materials, public relations, products, services, etc., the responsible person must verify that there is no similarity with existing copyrighted works using objective methods such as Google Image Search or commercial plagiarism detection tools.
- For information output by AI, including facts, figures, historical context, and legal interpretations, it must be cross-referenced with reliable primary sources (such as public data or official documents), and its authenticity must be confirmed by a human before use.
- When utilizing AI-generated content for important decision-making, the generation process (including prompts used and sources cross-referenced) must be recorded and approved by a superior or specialized department.
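The record-keeping clause above can be sketched as an audit record that keeps the prompt, the cross-referenced sources, and the approver together, so the generation process can be reconstructed later. The function name and record fields are illustrative assumptions, not a fixed schema.

```python
import json
from datetime import datetime, timezone

def make_audit_record(prompt: str, output_summary: str,
                      sources: list, approver: str) -> str:
    """Serialize an approval record for an AI generation used in decisions."""
    if not approver:
        raise ValueError("Approval by a superior or specialist is required")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output_summary": output_summary,
        "cross_referenced_sources": sources,
        "approved_by": approver,
    }
    return json.dumps(record, ensure_ascii=False)
```

Storing these records also serves the earlier point about defending copyright in AI-assisted works: the same logs that evidence approval can evidence the human creative contribution.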
Reconfirmation Flow of Terms for Commercial Use
Example Regulations:
- When using generative AI services for commercial purposes or for external distribution, the responsible person must review the latest terms of use of the service and confirm in advance that there are no issues regarding the permissibility of commercial use and the attribution of rights to the generated content.
- If the service provider revises the terms of use, the content must be promptly scrutinized, and its consistency with internal regulations must be re-evaluated.
Conclusion: Consult a Japanese Attorney for Internal AI Regulations to Safely Harness AI
Generative AI, when properly controlled, can become an “excellent assistant” that enhances human creativity and elevates productivity to unprecedented levels. However, this assistant also carries the potential to infringe on others’ rights, such as copyright, and to fabricate plausible falsehoods that could jeopardize an organization. The risks detailed in this article are not intended to restrict the use of AI but to ensure a proper understanding of these risks and to establish appropriate “guardrails” so that you can fully accelerate its use.
Building AI governance within a company is no longer optional; it is a crucial requirement for sustainable management. Establishing internal regulations that reflect the Japanese Copyright Act, AI business guidelines, and the latest perspectives from the Agency for Cultural Affairs, along with creating a “Human-in-the-loop” system where humans hold ultimate responsibility, will determine the success or failure of businesses in the AI era.
Guidance on Measures by Our Firm
Monolith Law Office is a legal firm with extensive experience in both IT, particularly the Internet, and law. AI businesses come with numerous legal risks, making the support of attorneys well-versed in AI-related legal issues indispensable. Our firm provides advanced legal support for AI businesses utilizing technologies like ChatGPT, through a team of attorneys and engineers proficient in AI. We offer services such as contract drafting, examining the legality of business models, protecting intellectual property rights, addressing privacy concerns, and establishing internal AI regulations. Detailed information is provided in the article below.