What is AI Governance for Companies? Explaining Key Points Based on the “AI Business Guidelines”

On April 19, 2024, Japan’s Ministry of Internal Affairs and Communications, along with the Ministry of Economy, Trade and Industry, published the “AI Business Guidelines.” The purpose of these guidelines is to promote the safe use of AI and to advance its social implementation. The guidelines target businesses, public institutions, educational institutions, and organizations such as NPOs and NGOs involved in the development, provision, and use of AI.

The “AI Business Guidelines” emphasize the importance of establishing AI governance for each entity involved in development, provision, and use. So, what kind of initiatives are necessary for companies to establish AI governance?

This article will explain the key points of AI governance that companies should undertake based on the “AI Business Guidelines.”

Understanding the AI Business Guidelines in Japan

The AI Business Guidelines, established in 2024 by Japan’s Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry, aim to promote the safe and secure use of AI. They serve as a unified framework for AI governance within Japan, providing a set of principles to guide businesses in their AI-related activities.

Reference: AI Business Guidelines (Version 1.0) | Ministry of Economy, Trade and Industry

The guidelines were formulated by integrating and revising the previously established “AI Development Guidelines,” “AI Utilization Guidelines,” and “Governance Guidelines for the Implementation of AI Principles.” They are designed to help businesses engaged in AI to identify desirable practices for the safe and secure use of AI.

For a detailed explanation of the AI Business Guidelines, please refer to “A Lawyer’s Explanation of the Ministry of Economy, Trade and Industry’s AI Business Guidelines” on our website.

Related article: A Lawyer’s Explanation of the Ministry of Economy, Trade and Industry’s AI Business Guidelines

What is AI Governance?

AI governance, as defined in the AI Business Operators’ Guidelines, is “the design and operation of technical, organizational, and social systems by stakeholders with the aim of managing the risks arising from the utilization of AI at an acceptable level for stakeholders while maximizing the positive impact (benefits) it brings.”

Furthermore, regarding AI governance, the AI Business Operators’ Guidelines state that “in order to safely and securely utilize AI, it is important to establish AI governance that manages risks related to AI at an acceptable level for stakeholders and maximizes the benefits derived from it, through collaboration among all parties and practicing common guidelines across the entire value chain.”

In other words, the establishment of AI governance is crucial for each entity involved in the development, provision, and use of AI.

What Is Agile Governance in the Construction of AI Governance?

In the construction of AI governance, it is considered crucial to practice ‘Agile Governance.’

Agile Governance refers to an approach that, instead of setting fixed rules in advance, continuously cycles through ‘environmental and risk analysis,’ ‘goal setting,’ ‘system design,’ ‘operation,’ and ‘evaluation’ to flexibly respond to changes in the external environment and risks.

In AI-related businesses, the complexity and rapid pace of change mean that the goals are always shifting.

Therefore, in constructing AI governance, it is necessary to have flexible responses tailored to the size and nature of each company’s business, rather than fixing rules or procedures in advance. For this reason, the AI Business Operators Guidelines recommend the construction of governance using agile governance methods.

The term ‘agile’ in English means ‘quick’ or ‘nimble.’

The implementation of Agile Governance is described in the main text of the AI Business Operators Guidelines as follows:

  1. Conduct ‘environmental and risk analysis’ to understand the changes in the external environment, such as the social acceptance of AI systems and services, and to analyze the benefits and risks associated with their use.
  2. Decide whether to develop, provide, and use AI systems and services, and if so, set the ‘goals.’
  3. Design the ‘AI Management System’ to achieve the set AI governance goals and operate it. In doing so, companies should ensure transparency and accountability (such as fairness) to external stakeholders regarding the AI governance goals and their operational status.
  4. Continuously monitor whether the AI Management System is functioning effectively and conduct ‘evaluations.’
  5. Even after the start of the operation of AI systems and services, conduct ‘environmental and risk analysis’ again in light of changes in the external environment and revise the goals as necessary.

Specific action goals for practicing steps 1 to 5 above are listed in the Annex to the AI Business Operators Guidelines, so let’s examine each one concretely.
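As a purely illustrative aid (the guidelines do not prescribe any code or tooling, and every name below is hypothetical), the five steps can be pictured as a repeating loop in which re-analysis feeds back into goal setting:

    # Illustrative sketch only: the guidelines do not prescribe any code or tooling,
    # and every name below is hypothetical.
    from dataclasses import dataclass, field


    @dataclass
    class GovernanceState:
        risks: list = field(default_factory=list)
        goals: list = field(default_factory=list)
        evaluation_notes: list = field(default_factory=list)


    def analyze_environment_and_risks(state: GovernanceState) -> None:
        # Step 1: understand benefits/risks, social acceptance, and the company's AI proficiency.
        state.risks = ["output accuracy", "personal data leakage"]  # example findings


    def set_goals(state: GovernanceState) -> None:
        # Step 2: decide whether to develop/provide/use AI and, if so, set AI governance goals.
        state.goals = ["mitigate: " + r for r in state.risks]


    def design_and_operate_system(state: GovernanceState) -> None:
        # Step 3: design and operate the AI management system toward the goals,
        # ensuring transparency and accountability to external stakeholders.
        pass


    def evaluate(state: GovernanceState) -> None:
        # Step 4: continuously monitor whether the AI management system functions effectively.
        state.evaluation_notes.append("deviation review completed")


    def agile_governance_cycle(iterations: int) -> GovernanceState:
        state = GovernanceState()
        for _ in range(iterations):
            analyze_environment_and_risks(state)  # Step 5: re-analysis repeats even after launch
            set_goals(state)
            design_and_operate_system(state)
            evaluate(state)
        return state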

Action Goals for Practicing Agile Governance

Let’s review the action goals for practicing agile governance based on the Annex to the AI Business Operators Guidelines.

Environmental & Risk Analysis

To implement ‘Environmental & Risk Analysis’ as part of Agile Governance, the following three objectives are outlined in the annex of the AI Business Operators Guidelines:

  • Understanding of Benefits/Risks
  • Understanding of Social Acceptance of AI
  • Understanding of the Company’s AI Proficiency

Understanding Benefits/Risks

One of the action goals for conducting “Environmental & Risk Analysis” is the “Understanding of Benefits/Risks.”

The AI Business Operators Guidelines Annex explains as follows:

Action Goal 1-1 [Understanding Benefits/Risks]

Under the leadership of the management, each entity should clarify the purpose of AI development, provision, and use, and then concretely understand not only the benefits derived from AI but also the unintended risks in light of their business. They should report these to the management, share them within the management team, and update their understanding in a timely manner.

Source: AI Business Operators Guidelines Annex

The key points to note are as follows:

  • Clearly defining the purpose of AI development, provision, and use in terms of creating business value and solving social issues
  • Concretely understanding the benefits and risks in a way that is tied to one’s own business
  • Being aware of the ‘risks’ to avoid and issues that span multiple entities, and ensuring benefits and reducing risks across the entire value chain/risk chain
  • Establishing a system to quickly report/share with the management team
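For illustration only, one hypothetical way to keep such an understanding concrete and reportable to management is a simple benefit/risk register per AI use case, for example:

    # Hypothetical benefit/risk register sketch; the guidelines do not mandate any particular format.
    from dataclasses import dataclass
    from datetime import date


    @dataclass
    class BenefitRiskEntry:
        use_case: str                 # e.g., a customer support chatbot
        purpose: str                  # business value or social issue the AI addresses
        benefits: list
        risks: list                   # unintended risks tied to the company's own business
        affected_parties: list        # value chain / risk chain perspective
        last_reported_to_management: date


    entry = BenefitRiskEntry(
        use_case="customer support chatbot",
        purpose="shorten response times for user inquiries",
        benefits=["24/7 availability", "reduced support workload"],
        risks=["inaccurate answers", "leakage of personal data entered in prompts"],
        affected_parties=["end users", "data subjects", "AI provider"],
        last_reported_to_management=date(2024, 6, 1),
    )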

Understanding the Social Acceptance of AI

Another action goal for the implementation of “Environmental and Risk Analysis” is “Understanding the Social Acceptance of AI.”

The AI Business Operators’ Guidelines Annex explains as follows:

Action Goal 1-2 [Understanding the Social Acceptance of AI]

It is expected that under the leadership of the management, each entity will understand the current state of social acceptance based on stakeholders’ opinions before the full-scale development, provision, and use of AI. Furthermore, even after the full-scale development, provision, and use of AI systems and services, it is expected that stakeholders’ opinions will be reconfirmed in a timely manner, taking into account changes in the external environment.

Source: AI Business Operators’ Guidelines Annex

The points for implementation are as follows:

  • Identify stakeholders
  • Strive to understand social acceptance among stakeholders and develop, provide, and use AI
  • Even after the launch of AI systems and services, consider the rapidly changing external environment and reconfirm stakeholders’ opinions as necessary and in a timely manner

Understanding Your Company’s AI Proficiency

The third action goal for conducting “Environmental and Risk Analysis” is “Understanding Your Company’s AI Proficiency.”

The AI Business Operator Guidelines Annex explains as follows:

Action Goal 1-3 [Understanding Your Company’s AI Proficiency]

Under the leadership of the management, each entity, based on the implementation of Action Goals 1-1 and 1-2, and except in cases where the risk is deemed minor, should evaluate their company’s AI proficiency. This evaluation should be based on the extent of experience in developing, providing, and using AI systems and services, the number and experience of employees, including engineers involved in these processes, and the degree of literacy in AI technology and ethics among these employees. Companies are expected to reassess their AI proficiency periodically and, if possible, disclose the results to stakeholders within a reasonable scope. If a company decides not to evaluate its AI proficiency because the risk is considered minor, it is expected to disclose this fact and the reasons for it to stakeholders.

Source: AI Business Operator Guidelines Annex

The key points for implementation are as follows:

  • Assess the necessity of evaluating AI proficiency in light of each entity’s business domain and size.
  • If it is determined that an evaluation of AI proficiency is necessary, visualize the company’s ability to respond to AI risks and assess its AI proficiency.
  • If it is determined that an evaluation of AI proficiency is not necessary, disclose this fact and the reasons within a reasonable scope to stakeholders, if possible.

Goal Setting

In the context of Agile Governance, the AI Business Operators Guidelines Annex suggests the following action goals for implementing ‘Goal Setting’:

Action Goal 2-1 [Setting AI Governance Goals]

Under the leadership of the management team, each entity should consider the potential benefits/risks of AI systems and services, societal acceptance related to the development, provision, and use of AI systems and services, and their own AI proficiency. While paying attention to the importance of the process leading to the setting of AI governance goals, entities should deliberate whether to establish their own AI governance goals (such as an AI policy) and proceed to set them. Furthermore, it is expected that the set goals will be disclosed to stakeholders. If an entity decides not to set AI governance goals due to the potential risks being minor, it is expected to disclose this fact and the reasons to stakeholders. If the ‘Common Guidelines’ provided in this annex are deemed to function sufficiently, they may be adopted as the entity’s AI governance goals in lieu of setting their own.

Even if an entity decides not to set goals, it is expected to understand the importance of this guideline and appropriately implement initiatives related to Action Goals 3 to 5.

Source: AI Business Operators Guidelines Annex

The key points for implementation are as follows:

  • Consider whether to establish ‘AI Governance Goals’ for each entity
  • If it is deemed necessary to set goals, proceed to establish them
  • If it is determined that setting goals is not necessary, disclose this fact and the reasons to stakeholders within a reasonable scope, if possible.
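For illustration only, an “AI governance goal” such as an AI policy could be held internally as a small structured document. The fields below are entirely hypothetical and are not prescribed by the guidelines:

    # Hypothetical structure for an internal AI policy; nothing here is prescribed by the guidelines.
    ai_policy = {
        "scope": ["development", "provision", "use"],   # which activities the policy covers
        "principles": [                                 # example principles a company might adopt
            "human-centric",
            "safety",
            "fairness",
            "privacy protection",
            "security",
            "transparency",
            "accountability",
        ],
        "decision": {
            "goals_set": True,                          # False if the potential risks are deemed minor
            "rationale": "generative AI is used in customer-facing services",
            "disclosed_to_stakeholders": True,          # goals (or reasons for not setting them) are disclosed
        },
    }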

System Design (Construction of AI Management Systems)

In order to implement Agile Governance’s “System Design (Construction of AI Management Systems),” the following four objectives are outlined in the AI Business Operators Guidelines annex:

  • Mandatory evaluation of deviations from goals and response to deviations
  • Enhancing the literacy of personnel in AI management systems
  • Strengthening AI management through cooperation between entities and departments
  • Reducing the burden of incidents on users through prevention and early response

Mandatory Evaluation of Deviations from Goals and Response to Deviations

The first action goal for implementing “System Design (Construction of AI Management Systems)” is the “Mandatory Evaluation of Deviations from Goals and Response to Deviations.”

The AI Business Operators Guidelines state the following:

Action Goal 3-1 [Mandatory Evaluation of Deviations from Goals and Response to Deviations]

Under the leadership of the management layer, each entity is expected to identify deviations from their AI governance goals, assess the impact of these deviations, and, if risks are recognized, consider the magnitude, scope, and frequency of occurrence to determine the rationality of acceptance. If rationality is not recognized, a process to reconsider the approach to AI development, provision, and use is to be incorporated at appropriate stages, such as the overall AI management system, and during the design, development, pre-use, and post-use stages of AI systems and services. It is important for the management layer to establish basic policies for the reconsideration process, while the operational layer should concretize this process. Furthermore, it is expected that individuals not directly involved in the development, provision, and use of the AI in question will participate in the evaluation of deviations from AI governance goals. However, it is not appropriate to arbitrarily prohibit the development, provision, and use of AI solely on the basis that there are deviations. Therefore, the evaluation of deviations is merely a step to assess risks and should only serve as a trigger for improvement.

Reference: AI Business Operators Guidelines Annex

Key points for practice are as follows:

  • Identify and evaluate the extent to which current AI systems/services deviate from the “AI Governance Goals”
  • If risks are recognized in the use of AI systems/services, determine the rationality of their acceptance
  • If rationality is not recognized, reconsider the approach to development, provision, and use, and incorporate the process for reconsideration at appropriate stages of development, provision, and use, as well as within the decision-making processes of the organization
  • The management layer should take leadership in these actions, be responsible for the decisions made, and ensure that the operational layer concretizes and continuously implements them
  • To foster awareness within each entity, share the decided deviation evaluation items internally
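As a rough, hypothetical illustration of the deviation-evaluation process above (class names and wording are invented for this example; the guidelines describe only the process), an assessment might be recorded as follows:

    # Hypothetical deviation-evaluation sketch; the guidelines describe only the process, not code.
    from dataclasses import dataclass


    @dataclass
    class DeviationAssessment:
        governance_goal: str
        observed_deviation: str
        magnitude: str              # e.g., "low" / "medium" / "high"
        scope: str                  # who or what is affected
        frequency: str              # how often the deviation occurs
        acceptable: bool            # judgment on the rationality of accepting the risk
        reviewer_independent: bool  # reviewer not directly involved in the AI in question


    def next_step(assessment: DeviationAssessment) -> str:
        # A deviation alone does not prohibit development, provision, or use;
        # the evaluation is only a trigger for risk assessment and improvement.
        if assessment.acceptable:
            return "accept and continue; record the rationale"
        return "reconsider the approach to development, provision, and use"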

Enhancing the Literacy of Personnel in AI Management Systems

The second action goal for implementing “System Design (Construction of AI Management Systems)” is “Enhancing the Literacy of Personnel in AI Management Systems.”

The AI Business Operators Guidelines state the following:

Action Goal 3-2 [Enhancing Literacy of Personnel in AI Management Systems]

Under the leadership of the management layer, each entity is expected to strategically improve AI literacy to operate AI management systems appropriately. This may include utilizing external educational materials. For example, officers, management teams, and personnel responsible for the legal and ethical aspects of AI systems and services should receive education to enhance general literacy regarding AI ethics and trustworthiness. Project members involved in the development, provision, and use of AI systems and services should receive training not only in AI ethics but also in AI technology, including generative AI. Education on the positioning and importance of AI management systems should be provided to all.

Reference: Annex to AI Business Operators Guidelines

Key points for practical implementation include:

  • Improve AI literacy through training and materials suited to the role and responsibilities each person is expected to carry in the AI management system
  • Place particular importance on AI ethics, for example by requiring all employees to complete courses on it

Strengthening AI Management through Inter-Entity and Inter-Departmental Cooperation

The third action goal for the implementation of “System Design (construction of AI management systems)” is “Strengthening AI Management through Inter-Entity and Inter-Departmental Cooperation.”

The AI Business Operators Guidelines state the following:

Action Goal 3-3 [Strengthening AI Management through Inter-Entity and Inter-Departmental Cooperation]

Entities, except when handling all processes from the preparation of datasets used for learning to the development, provision, and use of AI systems and services within their own department, are expected to clarify operational challenges and necessary information for resolving such challenges that cannot be sufficiently addressed by their company or department alone. Under the leadership of the management and while paying attention to trade secrets, they are expected to share this information within a possible and reasonable scope, ensuring fair competition. In doing so, it is expected that entities will agree in advance on the scope of information disclosure and consider entering into confidentiality agreements to facilitate the necessary exchange of information.

Reference: Annex to the AI Business Operators Guidelines

Key points for practice include the following:

  • Identifying operational challenges and necessary information for resolving issues related to AI systems and services that cannot be solved by entities alone
  • Sharing within a possible and reasonable scope between entities, while paying attention to intellectual property rights and privacy

※Note that these practices are based on the premise of ensuring fair competition, taking into account various laws and regulations, each entity’s AI policies, trade secrets, and data provided on a limited basis.

Reducing User Burden Related to Incidents Through Prevention and Early Response

The fourth action goal for implementing “System Design (Construction of AI Management Systems)” is “Reducing User Burden Related to Incidents Through Prevention and Early Response.”

The AI Business Operators Guidelines state the following:

Action Goal 3-4 [Reducing the Burden Related to Incidents for AI Users and Non-Business Users Through Prevention and Early Response]

It is expected that each entity, under the leadership of the management layer, will reduce the burden related to incidents for AI users and non-business users through the prevention of incidents and early response.

Reference: AI Business Operators Guidelines Annex

Key points for practice include:

  • Prevent system failures, information leaks, and complaints, and respond promptly if they occur
  • Establish a system for incident prevention and early response
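As one hypothetical way to support prevention and early response, an entity might keep a minimal incident record and a simple escalation routine along the following lines (all names and rules are illustrative, not taken from the guidelines):

    # Hypothetical incident record and early-response sketch; not part of the guidelines themselves.
    from dataclasses import dataclass
    from datetime import datetime


    @dataclass
    class AIIncident:
        detected_at: datetime
        category: str          # e.g., "system failure", "information leak", "complaint"
        description: str
        affected_users: int


    def initial_response(incident: AIIncident) -> list:
        # Early response aims to reduce the burden on AI users and non-business users.
        steps = ["record the incident", "notify the responsible department"]
        if incident.category == "information leak":
            steps.append("assess notification duties under applicable rules (illustrative step)")
        if incident.affected_users > 0:
            steps.append("inform affected users and provide a point of contact")
        steps.append("report to management and feed findings back into risk analysis")
        return steps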

Operation

To implement the “operation” of Agile Governance, the AI Business Operators Guidelines Annex suggests the following three action goals:

  • Ensuring a state of explainability for the operation of AI management systems
  • Ensuring a state of explainability for the operation of individual AI systems
  • Considering proactive disclosure of the status of AI governance practices

Ensuring an Explainable State of AI Management System Operations

The first action goal for implementing “Operation” is “Ensuring an explainable state of AI management system operations.”

The AI Business Operators Guidelines state the following:

Action Goal 4-1 [Ensuring an Explainable State of AI Management System Operations]

Under the leadership of the management layer, each entity is expected to ensure transparency and accountability to relevant stakeholders regarding the operational status of the AI management system. For example, this includes recording the implementation status of the deviation evaluation process outlined in Action Goal 3-1.

Source: Annex to the AI Business Operators Guidelines

The key point in practice is to keep the operational status of the AI management system explainable to relevant stakeholders within an appropriate and reasonable scope.

Ensuring an Explainable State for Individual AI System Operations

The second action goal for implementing “Operation” is “Ensuring an explainable state for individual AI system operations.”

The AI Business Operators Guidelines state the following:

Action Goal 4-2 [Ensuring an Explainable State for Individual AI System Operations]

Under the leadership of the management layer, each entity is expected to continuously conduct a divergence assessment for the trial and full-scale operation of each AI system/service. This involves monitoring the status of both trial and full-scale operations, cycling through the PDCA process, and recording the results. Entities developing AI systems are expected to support the monitoring by entities that provide and use the AI systems.

Reference: AI Business Operators Guidelines Annex

Key points for practice include:

  • Monitor the status of each entity’s AI operations, cycle through the PDCA process, and record the results.
  • In cases where it is difficult for an entity to respond on its own, collaborate among entities.
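For illustration, each monitoring cycle for an individual AI system could be recorded in a simple PDCA log such as the sketch below (field names are hypothetical; the guidelines do not prescribe a format):

    # Hypothetical PDCA monitoring record for an individual AI system; the format is not prescribed.
    from dataclasses import dataclass
    from datetime import date


    @dataclass
    class PDCARecord:
        system_name: str
        period: str        # e.g., "2024-Q3"
        plan: str          # what is monitored and against which AI governance goal
        do: str            # operational status during the period (trial or full-scale)
        check: str         # result of the deviation assessment
        act: str           # improvement taken, or "none required"
        recorded_on: date


    record = PDCARecord(
        system_name="document-summarization service",
        period="2024-Q3",
        plan="monitor summary accuracy against the reliability goal in the AI policy",
        do="full-scale operation; about 12,000 requests processed",
        check="accuracy within tolerance; two complaints about tone",
        act="updated the prompt template and the user notice",
        recorded_on=date(2024, 10, 1),
    )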

Considering Proactive Disclosure of AI Governance Practices

The third action goal for implementing “Operation” is “Considering the proactive disclosure of AI governance practices.”

The AI Business Operators Guidelines state the following:

Action Goal 4-3 [Considering Proactive Disclosure of AI Governance Practices]

Entities are expected to consider disclosing information related to the setting of AI governance goals, the establishment and operation of AI management systems, and other relevant details as non-financial information under the Corporate Governance Code. This applies not only to listed companies; non-listed entities are also encouraged to consider disclosing information about their AI governance activities. Furthermore, if the decision is made not to disclose, entities are expected to share this fact and the reasons with their stakeholders.

Reference: AI Business Operators Guidelines Annex

Key points for practice include:

  • Consider ensuring transparency of information related to AI governance, from the company’s fundamental approach to AI to the establishment and operation of AI management systems.
  • When disclosing information on AI governance, consider positioning it as non-financial information within the Corporate Governance Code.
  • If the decision is made not to disclose information on AI governance, publicly share this fact along with the reasons to stakeholders.

Evaluation

To implement the ‘Evaluation’ aspect of Agile Governance, the AI Business Operators Guidelines Annex lists the following two objectives:

  • Verification of the AI Management System’s functionality
  • Consideration of external stakeholders’ opinions

Verification of the AI Management System’s Functionality

The first objective for carrying out ‘Evaluation’ is the ‘Verification of the AI Management System’s functionality’.

The AI Business Operators Guidelines state the following:

Action Goal 5-1 [Verification of the AI Management System’s Functionality]:

It is expected that under the leadership of the management, individuals with relevant expertise independent from the design and operation of the AI Management System will evaluate and continuously improve the system in light of the AI Governance Goals. This involves assessing whether the AI Management System is appropriately designed and operated, i.e., through the practice of Action Goals 3 and 4, whether the system is functioning properly towards achieving the AI Governance Goals.

Quote: AI Business Operators Guidelines

Key points for practice include:

  • Management should explicitly state the focus points for evaluation aimed at continuous improvement in their own words
  • Assign individuals with relevant expertise independent from the design and operation of the AI Management System
  • Monitor whether the AI Management System is functioning properly
  • Implement continuous improvements based on the results of monitoring

Consideration of External Stakeholders’ Opinions

The second objective for carrying out ‘Evaluation’ is the ‘Consideration of external stakeholders’ opinions’.

The AI Business Operators Guidelines state the following:

Action Goal 5-2 [Consideration of Stakeholders’ Opinions]:

Under the leadership of the management, it is expected that each entity will consider seeking opinions on the AI Management System and its operation from stakeholders. Furthermore, if it is decided not to implement the content of such opinions, it is expected that the reasons for this decision will be explained to the stakeholders.

Quote: AI Business Operators Guidelines Annex

Key points for practice include:

  • Consider seeking opinions on the AI Management System and its operation from stakeholders
  • If the content of such opinions is not implemented, explain the reasons to the stakeholders

Reanalysis of Environment & Risk

One of the action goals for implementing the “Reanalysis of Environment & Risk” under Agile Governance is the “timely reimplementation of Action Goals 1-1 to 1-3”.

The AI Business Operators Guidelines state the following:

Action Goal 6-1 [Timely Reimplementation of Action Goals 1-1 to 1-3]:

Under the leadership of the management layer, each entity is expected to quickly grasp changes in the external environment, such as the emergence of new technologies and changes in social systems like regulations, and to timely reevaluate, update understanding, and acquire new perspectives. Based on this, improvements or reconstruction of AI systems, as well as enhancements in operations, are anticipated. Moreover, when implementing Action Goal 5-2, it is expected that not only the existing AI management systems and their operations will be considered but also that opinions from external sources will be sought for a review of the entire AI governance, including environmental and risk analysis, in line with the Agile Governance emphasized in these guidelines.

Source: Annex to the AI Business Operators Guidelines

Key points for practice include:

  • Grasping changes in the external environment, such as the emergence of new technologies, innovations related to AI, and changes in social systems like regulations
  • Timely reevaluation, updating understanding, and acquiring new perspectives, followed by corresponding improvements, reconstruction, or changes in the operation of AI systems
  • Embedding the concept of AI governance as part of the organization’s culture

Key Considerations When Establishing AI Governance Under Japanese Guidelines

The above represents the specific methods of AI governance based on the Agile Governance approach, as outlined in the “AI Business Operators’ Guidelines.”

However, these are common practices intended for all AI business operators. Each company needs to consider individually which business model it falls under (development, provision, or use) and what kind of AI systems or services it offers.

The organization and systems your company needs, and the actions and documentation required, vary from case to case. Moreover, it is essential to carry these out with an eye on how they will lead to future business benefits and the avoidance of disadvantages.

Additionally, various legal risks lurk within the AI business. Therefore, we recommend obtaining support from an attorney from the initial stages of implementing measures.

Conclusion: Optimize for the AI Business Operator Guidelines with the Help of an Attorney

This article has explained how to establish AI governance in accordance with the AI Business Operator Guidelines.

The AI Business Operator Guidelines provide detailed guidance on AI governance methods, expecting management to build internal structures and prepare documentation as part of their governance responsibilities.

In the rapidly changing AI business landscape, optimizing for the complex and newly established guidelines can be extremely challenging. We recommend building the most suitable AI governance for your company with early advice from experts who are well-versed in both IT fields, including AI, and Japanese law.

Guidance on Measures Provided by Our Firm

Monolith Law Office is a law firm with extensive experience in both IT, particularly the internet, and legal matters.

The AI business is accompanied by many legal risks, and the support of attorneys well-versed in legal issues related to AI is essential. Our firm provides advanced legal support for AI businesses, including those involving ChatGPT, through a team of AI-knowledgeable attorneys and engineers. Our services include contract drafting, legality reviews of business models, intellectual property protection, and privacy compliance. Details are provided in the article below.

Areas of practice at Monolith Law Office: AI (ChatGPT, etc.) Legal Affairs

Editor in Chief: Toki Kawase, Managing Attorney

An expert in IT-related legal affairs in Japan who established MONOLITH LAW OFFICE and serves as its managing attorney. A former IT engineer, he has also been involved in the management of IT companies. He has served as legal counsel to more than 100 companies, ranging from industry leaders to seed-stage startups.
