Attorney Explains the Contents of the Ministry of Economy, Trade and Industry's "AI Business Operators Guidelines"

The “AI Business Operators Guidelines (Version 1.0)” compiled by the Ministry of Economy, Trade and Industry in 2024 serve as a crucial guideline indicating the direction of AI governance that companies should pursue. They were established against the backdrop of the rapid advancement of AI technology, where risk management and ethical challenges in its social implementation have become significant themes for both corporations and society at large. The purpose of these guidelines is to promote the proper use of AI and enhance societal trust. As soft law without legal binding force, the guidelines clarify the roles of each entity and emphasize a risk-based approach and consistency with international systems.
Furthermore, a distinctive feature of these guidelines is the adoption of the concept of “Agile Governance,” which is not a set of fixed rules but rather a flexible response to changes in the environment. This mechanism enables companies to continuously cycle through environmental and risk analysis, goal setting, operation, and evaluation in the development, provision, and use of AI, thereby achieving risk mitigation and the sustainable development of technology. Additionally, elements such as collaboration between multiple entities, proper data circulation, and active involvement of management are emphasized as pillars supporting effective governance.
Publication of the METI “AI Business Operators Guidelines” and Its Background

In April 2024, the Ministry of Economy, Trade and Industry (METI) formulated and published the “AI Business Operators Guidelines (Version 1.0).” These guidelines are positioned as non-binding principles to support the development and social implementation of AI technology. Unlike laws that impose obligations, they provide basic concepts, principles, and guidelines that businesses can use as a reference in practice. The structure consists of a main text, which outlines the concepts and principles, and an appendix that presents specific methods of practice.
Approach Based on the “Principles for a Human-Centric AI Society”
The “AI Business Operators Guidelines” are founded on the “Principles for a Human-Centric AI Society,” which were presented in March 2019. These principles advocate the fundamental philosophy that “humans should use AI, not be used by AI.” Specifically, AI should be utilized under human control as a means to augment human capabilities, and we must avoid situations where humans become overly dependent and manipulated by AI.
Furthermore, AI should not merely be a substitute for human labor; it should serve as an advanced and convenient tool that enhances human creativity and capabilities. Therefore, when using AI, users must make judicious decisions on how to utilize it and take responsibility for those decisions.
Consolidation and Review of Past Guidelines
The current guidelines have been formulated after consolidating and reviewing three major previous guidelines.
- “AI Development Guidelines” (2017): Explains the basic principles and points to consider in AI development.
- “AI Utilization Guidelines” (2019): Presents the basic principles of AI utilization and their explanations.
- “Governance Guidelines for the Implementation of AI Principles” (2021): Offers concrete action goals and practical examples to promote social implementation.
These have been utilized as guidelines that AI business operators should adhere to. The new “AI Business Operators Guidelines” integrate them and include content that takes into account international trends and the emergence of new technologies.
The Formulation Process and the Path to Publication
The draft “AI Business Operators Guidelines” was first presented at the Cabinet Office’s AI Strategy Council (7th meeting) on December 21, 2023. It was then officially published under the names of the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry on January 19, 2024, followed by a public comment period from January 20 to February 19. After this solicitation of opinions, the “AI Business Operators Guidelines (Version 1.0)” were finalized and published on April 19 of the same year.
The Significance of the Guidelines
Unlike the EU, where AI regulation has been enacted as so-called hard law, Japan’s “AI Business Operators Guidelines” are a form of soft law, which does not carry legal binding force. Instead, they provide fundamental principles and guidelines for actions that AI business operators should practice. As AI technology continues to advance, these guidelines are a crucial step in promoting proper utilization in society and building a trustworthy AI society. Business operators are expected to refer to them when developing and implementing policies for their own AI development and utilization.
The Fundamental Concepts of the “AI Business Operators Guidelines” Under Japanese Law
The “AI Business Operators Guidelines” in Japan are structured around three fundamental concepts to promote the development of AI technology and its proper social implementation:
Supporting Voluntary Initiatives by Operators
The guidelines encourage a “risk-based approach,” where measures are adjusted according to the magnitude and likelihood of risks, providing direction for the measures that companies should undertake. This approach recommends flexible countermeasures based on the content and degree of risk, assuming that risks vary significantly depending on the use case and the nature of the AI model. Particularly since the risks associated with each use case do not necessarily align with the size of the company, it is essential to carefully assess the nature of these risks.
While considering legal regulations for high-risk categories, it is crucial to choose flexible regulatory methods for low-risk categories, such as private certification or self-declaration. Such an approach enables effective risk management while avoiding excessive regulation.
Harmonization with International Discussions
The market for AI technology extends beyond national borders, and it is vital for Japanese companies to maintain international competitiveness by ensuring compatibility with foreign systems and principles. The fundamental concepts and principles of these guidelines are based on international guidelines, such as the OECD’s AI Principles, and an annex also specifies the relationship with guidelines from other countries.
Given that AI models and services, including foundation models, operate in a cross-border market, it is crucial for Japanese companies to ensure interoperability with foreign systems to thrive globally. Furthermore, if any regulations are to be established, it is essential to avoid situations where Japanese companies bear an undue burden that undermines their competitiveness. Instead, effective systems should be implemented for both domestic and foreign companies, ensuring a level playing field.
Clarity for the Reader
The AI Business Operators Guidelines prioritize ease of understanding for the reader. A distinctive feature is the clear distinction of risks and response policies for different stakeholders, such as “AI developers,” “AI providers,” and “AI users.”
“Multi-Stakeholder” and “Living Document” Concepts
In addition, the guidelines are characterized by the adoption of the “multi-stakeholder” and “Living Document” concepts. That is, they were formulated with an emphasis on effectiveness and legitimacy through repeated deliberations among multiple stakeholders, including educational and research institutions, civil society including consumers, and private companies. Furthermore, as a “Living Document,” they are planned to be updated as necessary, drawing on the idea of agile governance for the continuous improvement of AI governance.
Distinguishing Roles and Scope of Application in the “AI Business Operators Guidelines” Under Japanese Law

One of the key features of the “AI Business Operators Guidelines” is the clear delineation of the entities subject to them based on their attributes, specifying the responsibilities each should undertake. The guidelines identify “AI developers,” “AI providers,” and “AI users” as the applicable entities, while “non-business users” and “data providers” are excluded from their scope.
Roles and Responsibilities of AI Developers, Providers, and Users
AI developers refer to entities responsible for data preprocessing, learning, and the development process of AI. This includes those who handle preprocessing of collected training data, generation of AI models, and verification of utility through trials. These processes form the foundation for ensuring the performance and reliability of AI, thus developers are expected to manage data appropriately and construct high-quality models.
AI providers are entities that handle the implementation and provision of AI systems. They integrate AI models into existing or newly built systems, coordinate them with other systems, and provide them to users. They may also promote proper use of AI systems through awareness-raising and operational support, or directly operate and provide AI services. This enables users to safely and effectively utilize AI systems.
AI users are entities that utilize AI systems or services. Users, informed by the provider’s cautions, are expected to operate AI systems properly and benefit from them. They also bear the responsibility to continue using the AI systems in accordance with their design intentions. Thus, the actions of users can impact the social acceptance and trust in AI systems.
Entities Excluded from Application
Conversely, “data providers” and “non-business users” are not subject to the guidelines. Data providers play the role of supplying training data for AI, which includes not only specific corporations or individuals but also data supplied through sensors or systems. On the other hand, non-business users refer to entities that enjoy the benefits of AI systems or services outside of business activities and may be affected by AI-based decisions depending on the situation.
For example, in AI services utilizing job seeker data, “AI developers” are those who develop the AI model, “AI providers” are those who offer the service, and companies using the service are “AI users.” Meanwhile, past job seekers would be “data providers,” and applicants affected by the service’s decisions would be “non-business users.”
Common Guiding Principles in the “AI Business Operators Guidelines”
The “AI Business Operators Guidelines” present ten common guiding principles that all entities should adhere to, regardless of their attributes. These principles are divided into matters that entities should tackle on their own and matters to be addressed in collaboration with society, positioning them within the framework of AI governance.
Seven Principles for Entities to Address on Their Own
- Human-Centric: When utilizing AI, it is required to respect individual dignity, understand the accuracy and limitations of the output, and not use it for inappropriate purposes. Attention is also needed to prevent AI applications from skewing information and values, like filter bubbles, and limiting choices.
- Safety: Conduct risk analysis and implement appropriate measures to ensure that AI systems do not deviate from their intended use. Ensuring the accuracy and transparency of data and proper updating of AI models is also crucial.
- Fairness: Efforts must be made to ensure that AI outputs do not promote prejudice or discrimination. Human judgment should be involved in AI decisions, and consideration for potential biases is required.
- Privacy Protection: Adherence to international standards for personal data protection and compliance with international guidelines and norms regarding privacy is required. Ensuring interoperability in the cross-border transfer of personal data is also important.
- Security Assurance: Maintaining the confidentiality, integrity, and availability of AI systems and taking reasonable measures based on technological standards is necessary. Preparing for external attacks and responding to the latest risks is also required.
- Transparency: Making the learning process and reasoning of AI systems verifiable and providing explanations when necessary is required. However, disclosure of algorithms or source code is not demanded, and privacy and trade secrets are respected.
- Accountability: In the event of errors in AI outputs, it is necessary to accept feedback and conduct objective monitoring. Developing a policy for stakeholder response and regularly reporting progress is expected.
Three Principles to Address in Collaboration with Society
- Education & Literacy: Education is required to ensure that everyone involved with AI acquires sufficient AI literacy, and stakeholders should also be informed about the characteristics and risks of AI.
- Ensuring Fair Competition: An environment must be created where new businesses and services utilizing AI can compete fairly, aiming for sustainable economic growth and solving social issues.
- Innovation: Promoting international diversity and collaboration between industry, academia, and government, and ensuring interoperability between AI systems. Compliance with existing standards is recommended when they are available.
The Importance of Agile Governance and How to Practice It Under Japanese Law
The “AI Business Operators Guidelines” emphasize that in constructing AI governance, a one-size-fits-all approach is not suitable. Instead, a flexible response tailored to the size and nature of each company’s business is necessary. Rather than simply imitating the efforts of other companies, it is required that everyone from management to the field iteratively experiments to design and operate rules that are appropriate for their own company. Key to this process is the practice of “agile governance.”
What is Agile Governance?
Agile governance refers to an approach that, instead of setting fixed rules in advance, continuously cycles through the following steps to flexibly respond to changes in the external environment and risks:
- Environmental & Risk Analysis: Accurately grasp external conditions and technological risks.
- Goal Setting: Set appropriate goals according to the current situation.
- System Design: Design effective AI systems aimed at achieving the objectives.
- Operation: Operate the designed system and verify the results.
- Evaluation: Regularly evaluate the operational results and make adjustments as necessary.
By continuously cycling through such a process, it becomes possible to respond quickly and effectively to the rapidly changing technological environment and market trends.
The Three Pillars Supporting Agile Governance

“Agile Governance” indicates the evolving nature of governance that should progress alongside the advancement of AI technology. Companies are required to respond flexibly according to their business models and operating environments, and to work together from the management level to the field. Furthermore, by establishing collaboration among stakeholders and international risk management, it is possible to make AI governance more effective. The following elements are considered important in this regard:
- Ensuring Collaboration Among Multiple Entities: Since AI technology involves multiple parties from the perspective of value chains and risk chains, it is important to clarify each party’s role and responsibility and ensure collaboration. This allows for proper risk management at each stage of AI system development, provision, and use.
- Ensuring Proper Data Circulation: For the implementation of AI governance, the proper circulation of data is indispensable. Especially when considering data use across multiple countries, it is necessary to maintain data transparency and security while considering the regulations and risk management requirements of each country.
- Commitment of the Management: To realize effective AI governance, active involvement of the management is required. Specifically, this involves the formulation of governance strategies, the establishment of organizational structures, and the permeation of these as corporate culture. The commitment of the management acts as a driving force for the entire organization to work towards the same goals.
For more details on the AI governance that companies should establish as indicated in the “AI Business Operators Guidelines,” please refer to the following article.
Related Article: What is the AI Governance That Companies Should Undertake? Explaining Points Based on the “AI Business Operators Guidelines”
Guidance on Measures Provided by Our Firm
Monolith Law Office is a law firm with extensive experience in both IT, particularly the internet, and legal matters.
AI business comes with numerous legal risks, and the support of attorneys well-versed in legal issues related to AI is essential. Our firm provides sophisticated legal support for AI businesses, including those involving ChatGPT, through a team of AI-knowledgeable attorneys and engineers. We offer services such as contract drafting, legality reviews of business models, protection of intellectual property rights, and privacy compliance. Details are provided in the article below.
Areas of practice at Monolith Law Office: AI (ChatGPT, etc.) Legal Services