
MONOLITH LAW MAGAZINE


5 Steps to Successful AI Implementation: How to Develop 'Living AI Internal Regulations' and Advance In-House Training

The technological advancements in generative AI hold the potential to fundamentally transform traditional business processes. However, many organizations face challenges not just in overcoming technical barriers, but more critically, in managing legal and ethical risks and ensuring the integration of AI within their organizations. Simply distributing the latest tools company-wide and leaving their use to the discretion of individual departments—a “hands-off” approach—can lead to serious incidents such as information leaks and rights infringements, and may ultimately cause the organization’s overall productivity to stagnate.

To sustainably reap the benefits of AI, it is essential to establish “living AI internal regulations” that harmonize technological convenience with legal safety at a high level. Additionally, a phased educational process based on these regulations is crucial. This article explains five specific steps to achieve effective AI implementation, grounded in the latest Japanese legal regulations and government guidelines.

Risks of Unregulated AI Implementation in Japan

When considering the implementation of generative AI, organizations often face the issue of “shadow IT,” where usage at the operational level precedes the establishment of a clear organizational policy. Due to the high convenience of these tools, employees frequently use personal accounts to incorporate them into their work based on individual judgment. This approach of “just try it out” may seem to accelerate implementation in the short term, but in reality, it fosters unregulated usage and accumulates significant management risks.

Specifically, there are concerns about the inadvertent input of confidential information leading to the leakage of trade secrets, the generation of content that infringes on others’ copyrights, and the dissemination of inaccurate information externally. The “AI Business Operator Guidelines” published by the Ministry of Economy, Trade and Industry and the Ministry of Internal Affairs and Communications in April of Reiwa 6 (2024) also require businesses utilizing AI to appropriately assess these risks and establish necessary governance. Without a solid foundation of clear rules, employees cannot accurately determine what is permissible and what is prohibited, leading them to either hesitate in creatively utilizing AI or unknowingly commit serious errors.

Reference: Ministry of Internal Affairs and Communications|AI Business Operator Guidelines Page

Even if unregulated usage temporarily boosts organizational productivity, the costs associated with recovering from legal troubles and the loss of social trust are immeasurable. Therefore, in the initial stages of AI implementation, clearly defining a framework for safe usage does not restrict operational freedom but rather ensures an environment where technology can be utilized with confidence. The purpose of this article is to clarify the specific steps to build this “safe foundation” and establish an operational system that prevents it from becoming a mere formality.

 

Step 1: Articulating the Purpose of AI Implementation and Redefining Challenges

The first critical juncture in determining the success of AI implementation lies in whether AI can be redefined as a means to solve organizational challenges, rather than making the technology’s introduction an end in itself. If the purpose of implementing AI remains vague, the internal regulations formulated may become abstract and ineffective, resulting in “dead rules” that lack practical applicability in the field.

The initial step should be a thorough articulation of the purpose for which AI is being introduced, specifically identifying which department’s challenges it aims to address.

  • Accounting Department: Reducing administrative burdens. Security concerning external data transmission is the primary issue.
  • Development Department: Automating code generation. Key concerns include Article 30-4 of the Japanese Copyright Act, OSS licenses, and vulnerabilities.
  • Sales and Public Relations: Creating materials and generating FAQs. The focus is on information accuracy and the risk of rights infringement.

By concretizing the challenges each department needs to address, the direction for rules optimized for each specific operation becomes clearer.

Step 2: Selecting the Optimal Service and Reviewing Terms of Use Under Japanese Law

Next, select AI services that align with the defined objectives. The current market offers a wide range of services, from general-purpose generative AI like ChatGPT to AI specialized in specific domains such as legal, accounting, and programming. While general-purpose AI offers flexibility to handle a broad range of tasks, it may fall short in terms of accuracy in specialized fields and compliance with specific regulations compared to specialized AI.

When selecting services, it is crucial to ensure they comply with the organization’s security policies. Additionally, the potential for utilizing provided APIs and the presence of data protection features in enterprise plans are important considerations. Since there are often fundamental differences between free versions for individuals and paid versions for businesses regarding the use of input data for learning (and the ability to opt-out), contracting a business plan should be a prerequisite for company-wide implementation.

In selecting AI services, the most critical yet often overlooked aspect is the review of the terms of use provided by each vendor. Compared to general SaaS products, AI services tend to have complex and frequently changing conditions regarding data ownership and learning usage. The following five points are essential to verify before entering into a contract to minimize legal risks.

| Checkpoint | Details to Confirm | Legal and Practical Significance |
| --- | --- | --- |
| Scope of Prohibitions | Whether generating advice in specific fields (medical, legal, financial, etc.) is prohibited | Avoid account suspension or liability risks arising from terms violations |
| Commercial Use Permission | Whether commercial use of generated content is explicitly permitted, and whether this differs between plans | Secure stable rights when using output in revenue-generating activities |
| Intellectual Property Rights | Whether the terms clearly state that copyright in generated content belongs to the user | Protect output as one’s own creation and enable secondary use |
| Use for Machine Learning | Whether input data can be opted out of use for model retraining | Maintain secrecy management of trade secrets and prevent leakage of confidential information |
| General Clauses and Governing Law | The agreed jurisdiction, scope of indemnity, and governing law in case of disputes | Ensure predictability and manage costs in the event of a dispute |

Particularly, clauses regarding the use for machine learning are directly linked to the protection of trade secrets in Japan. If input data is incorporated into the vendor’s learning data, there is a risk that your company’s confidential information could be output as someone else’s response in the future. To be protected as a trade secret under the Japanese Unfair Competition Prevention Act, “secrecy management” must be recognized, but an environment where data is indiscriminately used for AI learning can be a fatal factor in losing this secrecy management.

Additionally, attention must be paid to the governing law. Many services provided by U.S. vendors are governed by laws such as those of the State of Delaware, which may limit dispute resolution to overseas venues. This could effectively mean relinquishing the exercise of rights for domestic companies. Therefore, when using such services for critical operations, it is advisable to consider negotiating contracts that adopt Japanese law as the governing law or designate Japanese courts as the agreed jurisdiction.
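The five checkpoints above can be treated as a simple pre-contract checklist that the legal team works through for each candidate vendor. The following is a minimal sketch of that idea; the checkpoint names, the sample findings, and the vendor review data are all hypothetical, not taken from any actual terms of use:

```python
from dataclasses import dataclass

@dataclass
class TermsCheck:
    """One pre-contract checkpoint for an AI vendor's terms of use."""
    name: str
    satisfied: bool
    note: str = ""  # free-form finding from the terms review

def unmet_checkpoints(checks):
    """Return the names of checkpoints that are not yet satisfied,
    i.e. items still blocking contract signature."""
    return [c.name for c in checks if not c.satisfied]

# Hypothetical review of one vendor's enterprise plan
review = [
    TermsCheck("scope_of_prohibitions", True, "internal use of legal drafting not prohibited"),
    TermsCheck("commercial_use_permitted", True),
    TermsCheck("ip_belongs_to_user", True),
    TermsCheck("training_opt_out_available", False, "opt-out offered only on the API tier"),
    TermsCheck("japanese_governing_law", False, "Delaware law; negotiate or accept the risk"),
]

print(unmet_checkpoints(review))
```

Recording each checkpoint with a short note, rather than a bare yes/no, preserves the reasoning for the next review cycle when the vendor's terms change.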

Step 3: Verification Cycle Through a Small-Scale Start (PDCA) in Japan

Rushing to implement rules across the entire organization without sufficient verification can lead to confusion on the ground and hasten the process of rules becoming mere formalities. It is recommended to conduct a trial implementation through a “small-scale start” targeting specific project teams or departments with high IT literacy and a clear awareness of issues.

The biggest drawback of a simultaneous implementation is the imposition of “one-size-fits-all rules” that ignore the diverse operational realities within the organization. Applying overly strict rules across the company can compromise convenience on the ground, while overly lenient rules fail to control risks. By setting a trial implementation period, you can accumulate data based on real experiences to identify what risks become apparent in actual workflows and what guidelines are necessary.

This period should function as a “sandbox” where failures are permissible. Employees should be encouraged to use AI, document the prompts they input, the outputs they receive, and any concerns that arise (such as misinformation due to hallucinations, inappropriate expressions, or signs of copyright infringement). A system should be established for legal and information systems personnel to review these records.
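The trial-period records described above can be kept as lightweight structured log entries so that legal and information-systems staff can filter for the ones needing review. The sketch below is one possible data model under the assumptions stated in the comments; the field names and sample entries are illustrative only:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrialLogEntry:
    """One record from the trial-period 'sandbox': what was asked,
    what came back, and any concerns the employee noticed."""
    user: str
    department: str
    prompt: str
    output_summary: str
    concerns: list = field(default_factory=list)  # e.g. "possible hallucination"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def entries_for_review(log):
    """Select entries that legal / information-systems staff should examine:
    here, anything the employee flagged with at least one concern."""
    return [e for e in log if e.concerns]

# Hypothetical sandbox log
log = [
    TrialLogEntry("sato", "accounting", "Summarize this invoice layout", "clean summary"),
    TrialLogEntry("tanaka", "dev", "Generate a sort function", "code output",
                  concerns=["output resembles a GPL-licensed snippet"]),
]
print([e.user for e in entries_for_review(log)])
```

In practice the filter could also surface entries matching keyword rules (e.g. client names), but employee-flagged concerns are the minimum signal the review meeting needs.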

To maximize the effectiveness of a small-scale start, it is beneficial to run a verification workflow consisting of the following six steps:

  1. Identify the target operations and formulate an “initial guideline” specific to those operations. This initial draft should concisely summarize the minimum prohibitions (such as prohibiting the input of confidential information) and recommended usage methods.
  2. Conduct introductory training for selected members and allow them to use AI in actual operations.
  3. Collect feedback from the field through regular hearings, focusing on raw insights such as whether the rules are hindering operations or if there were any unexpected risks encountered.
  4. Based on the collected issues, readjust the balance between risk and convenience, and improve the guidelines.
  5. Reapply the improved guidelines and continue to refine them further.
  6. Develop “standard regulations” for company-wide deployment based on the insights gained through this verification process.

By running this PDCA cycle, it is possible to elevate the rules from being top-down impositions to “living rules” that the field understands and can adhere to.

Step 4: Expanding Implementation Scope and Conducting Individual Risk Assessments by Department in Japan

When expanding the scope based on insights gained from the trial implementation, conducting individual risk assessments for each department is essential. Uniform regulations alone cannot address the differences in assets that need protection.

  • Human Resources Department: Emphasizes the prohibition of entering personally identifiable information and the transparency of automated decision-making, in accordance with the Japanese Personal Information Protection Act.
  • Research and Development Department: Prioritizes technical and contractual protections to ensure ideas do not become learning data for other companies.
  • Public Relations and Marketing: Focuses on measures against trademark and design similarities and the risk of public backlash.

By organizing these elements and clearly categorizing tasks into “permitted,” “conditionally permitted,” and “prohibited,” employees can confidently determine the extent to which they can utilize AI in their work.
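The three-tier classification above can also be expressed as a machine-readable policy table, which makes it easy to embed the rules in an internal portal or chatbot. This is a minimal sketch; the task names and conditions are hypothetical examples, and a real table would be maintained per department:

```python
# Hypothetical task-level policy table mirroring the three-tier
# classification: "permitted", "conditional" (with its condition),
# or "prohibited".
POLICY = {
    "draft_internal_memo": "permitted",
    "generate_marketing_copy": ("conditional",
                                "trademark/design similarity check before publication"),
    "generate_code": ("conditional",
                      "OSS-license and vulnerability review required"),
    "input_customer_personal_data": "prohibited",
}

def check_task(task):
    """Return (status, condition) for a task. Unknown tasks default to
    'prohibited' so that new use cases go through approval first."""
    entry = POLICY.get(task, "prohibited")
    if isinstance(entry, tuple):
        return entry
    return (entry, None)

print(check_task("generate_code"))
print(check_task("scrape_competitor_site"))  # unlisted, so prohibited by default
```

Defaulting unknown tasks to “prohibited” reflects the approval-first posture described in this article: convenience is granted explicitly, not assumed.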

Step 5: Organizational Design for Systematic Regular Review

The environment surrounding generative AI is rapidly evolving in terms of technology, legal regulations, and societal ethics. Consequently, internal policies that are established may become outdated within a few months. Incorporating a mechanism for regular review into the operational framework is key to achieving genuine governance under Japanese law.

Specifically, on a quarterly to semi-annual basis, the results of operational monitoring should be reviewed, and the validity of the policies should be re-evaluated. During the review, it is crucial to examine whether there are any discrepancies with the latest guidelines from the Japanese government (such as the Cabinet Office and the Ministry of Economy, Trade and Industry), whether there have been any new court rulings regarding the copyright of AI-generated content, or whether there have been any changes to the terms of the AI services being used.

Moreover, regular reviews should not be confined to the legal and IT departments alone. Establishing a review meeting that includes representatives from the field will help capture practical issues and prevent the policies from becoming obsolete. When policies are updated as a result of the review, it is important to promptly communicate the changes and their reasons to all employees and conduct re-education if necessary. By continuing this cycle, the AI literacy of the entire organization will remain up-to-date.

Key Points of AI Internal Regulations Supporting On-Site Operations in Japan

Effective AI internal regulations should not merely be a list of prohibitions. They must serve as a guide for employees when in doubt and function as a legal shield for the organization. Here, we explain the four essential pillars that should be included.

Clarifying Objectives and Scope for Consensus Building

At the beginning of the regulations, clearly state the “objectives” of why the company is introducing AI and what value it aims to create. This serves as a positive message encouraging employees to utilize AI while also providing a moral foundation to prevent inappropriate use.

The scope of application should be clearly defined to include not only full-time employees but also contract employees, temporary staff, and even external contractors. Particularly when external contractors use AI to deliver work products, it is necessary to organize at the contract level where the responsibility for quality control and rights management lies.

Mandating Education and Training to Strengthen Legal Defense

Simply distributing or posting internal regulations does not suffice to fulfill legal supervisory responsibilities. Establish a system where education and training on AI usage are mandated, granting usage rights only to employees who have completed the training. Proper management of training records is essential as evidence to argue that the company provided “appropriate supervision and education” in case an employee deviates from the rules and causes an incident.
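The gating described above, where usage rights are issued only on training completion and the completion record doubles as evidence of supervision, can be sketched as a trivial access check. The data model below is hypothetical; a real system would sit on the company's identity platform rather than an in-memory dict:

```python
# Hypothetical training-record store: employee_id -> completion date.
# Retained records serve as evidence of "appropriate supervision and
# education" if an incident later occurs.
completed_training = {}

def record_completion(employee_id, date):
    """Record that an employee finished the mandated AI training."""
    completed_training[employee_id] = date

def may_use_ai(employee_id):
    """Access check: AI service accounts are issued only to employees
    with a recorded training completion."""
    return employee_id in completed_training

record_completion("E1001", "2024-10-01")
print(may_use_ai("E1001"))  # trained, so access is granted
print(may_use_ai("E2002"))  # no training record, so no access
```

The same check can be re-run when regulations are updated: clearing or versioning the records forces re-education before access is restored.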

The training should cover not only technical aspects like prompt engineering but also explain the nature of hallucinations, considerations under Japanese copyright law, and specific examples of confidentiality obligations, using real-life cases.

Specifying Permissible Services and Eradicating Shadow IT

Clearly state that only “approved services,” which the company has verified for safety and entered into appropriate contracts with, can be used for business purposes. This systematically eliminates shadow IT using personal accounts. Additionally, by clarifying the application and approval process for using new services or plugins, the company can flexibly accommodate on-site needs while maintaining controlled implementation. Here, it is most prudent to prohibit the use of free versions in principle and limit usage to corporate plans where data is not used for learning, as a risk hedge.

Proper Exercise of Monitoring and Audit Authority

To ensure appropriate operations within the organization, the regulations should specify that the company has the authority to record and review employee input prompts and output results, conducting audits as necessary. This not only acts as a psychological deterrent against inappropriate use but also plays a crucial role in post-incident cause investigation and damage control in the event of information leakage. However, when conducting monitoring, it is important to notify employees in advance of the purpose and scope to ensure transparency, which is vital for maintaining trust with employees.

It is also important to consider how to integrate these items with existing work rules, confidentiality agreements, or IT usage policies. While establishing independent AI-specific regulations, ensure legal consistency by linking them so that serious violations can be subject to disciplinary provisions in the work rules.

Conclusion: Establishing Internal AI Regulations to Maximize AI’s True Potential

To transform AI into a powerful asset for an organization, the most crucial factors are not the budget for implementing the latest models, but rather the “human management skills” and “organizational governance” necessary to utilize them effectively.

The five steps explained in this article are all essential elements for creating “living rules” that permeate the workplace. While a hands-off implementation might lead to temporary efficiency gains, sustainable growth is supported by a sincere commitment to continuously balancing legal safety and convenience under Japanese law.

Guidance on Measures by Our Firm

Monolith Law Office is a legal firm with extensive experience in both IT, particularly the Internet, and law. The AI business involves numerous legal risks, making the support of attorneys well-versed in AI-related legal issues indispensable. Our firm provides advanced legal support for AI businesses utilizing technologies like ChatGPT, through a team of attorneys and engineers proficient in AI. Our services include drafting contracts, assessing the legality of business models, protecting intellectual property rights, addressing privacy concerns, and establishing internal AI regulations. Detailed information is provided in the article below.

Editor in Chief and Managing Attorney: Toki Kawase

An expert in IT-related legal affairs in Japan, he founded MONOLITH LAW OFFICE and serves as its managing attorney. A former IT engineer, he has also been involved in the management of IT companies and has served as legal counsel to more than 100 companies, ranging from top-tier organizations to seed-stage startups.
