
MONOLITH LAW MAGAZINE


Risks of Using ChatGPT for Business Purposes: Explaining the Legal Issues

Since its release, ChatGPT has garnered significant attention worldwide for its usefulness, but it has also become clear that its use carries various risks. Because legal frameworks are still being developed, many business leaders understandably feel uncertain about the risks of using ChatGPT in business and how to hedge against them.

This article will explain the legal risks and measures to consider when using ChatGPT for business purposes.

Four Risks of Utilizing ChatGPT for Business Purposes

In the corporate world, the adoption of natural language generation AI like ChatGPT is on the rise, with a 13.5% implementation rate among companies in fiscal year 2022, a figure that reaches 24.0% when including companies planning to adopt it. (Source: Japanese Ministry of Internal Affairs and Communications | Reiwa 5 (2023) White Paper on Information and Communications in Japan, Communications Usage Trend Survey, “Status of Implementation of IoT, AI, and Related Systems and Services in Companies”[ja])

Indeed, AI technologies, including ChatGPT, are effective in enhancing corporate operational efficiency and are expected to open up many more business opportunities. However, they also come with a host of legal risks. Therefore, it is necessary to fully understand these risks before deciding to use them for business purposes.

For example, experts are sounding the alarm about potential risks associated with AI, such as copyright issues, the spread of misinformation, leakage of confidential information, privacy concerns, and misuse for cyber-attacks.

This chapter will explain the four risks associated with using ChatGPT for business purposes.

The Risk of Information Leakage Due to Learning Functions

While ChatGPT is convenient, it is an AI chatbot built by learning from a wide range of data on the internet. Unless countermeasures are taken, there is a risk that the information you input will be used for training and could eventually be leaked.

According to OpenAI’s Data Usage Policy[ja], unless users access ChatGPT through the API or apply to opt out, the data they input into ChatGPT may be collected and used for training by OpenAI.

It is crucial not to input sensitive data such as personal or confidential information without taking protective measures. If personal information is accidentally entered into ChatGPT, a warning message is displayed, and ChatGPT is designed not to store or track personal information and cannot output it in the chat.
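As one illustrative protective measure, prompts can be screened for obvious personal data before they leave the company. The following is a minimal sketch in Python; the pattern list and the `redact` function are our own hypothetical example, not part of any OpenAI tooling, and a production system should rely on a dedicated PII-detection tool rather than simple regular expressions.

```python
import re

# Hypothetical example patterns for obvious personal data.
# A real deployment would use a dedicated PII-detection tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"),  # Japanese-style numbers
}

def redact(prompt: str) -> str:
    """Mask obvious personal data before the prompt is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Tanaka at tanaka@example.co.jp or 03-1234-5678."))
```

A screening step like this reduces, but does not eliminate, the risk of sensitive input; it should complement, not replace, internal usage rules.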

However, there have been incidents in the past where bugs in the systems operated by OpenAI, the company behind ChatGPT, led to the leakage of users’ registered personal information.

Related article: What is the Risk of Information Leakage with ChatGPT? Introducing 4 Measures to Take[ja]

The Risk of Unreliable Information

Since ChatGPT’s web browsing feature was introduced in May 2023, it has been equipped with a search function, allowing it to gather the latest information and base its answers on those results.

However, although ChatGPT presents its output as if it were true, its reliability is not guaranteed. ChatGPT’s responses are not generated based on the accuracy of the information in its training data; they are simply produced as the most probable (most plausible) text. Fact-checking ChatGPT’s answers is therefore essential: if a company disseminates false information based on them, its credibility could be damaged.

On April 3, 2023, the University of Tokyo expressed concerns about the technical challenges associated with ChatGPT and its potential impact on society, issuing the following statement:

“The principle behind ChatGPT involves creating plausible text probabilistically through machine learning and reinforcement learning of a vast amount of existing text and content. Consequently, the content generated may contain falsehoods. It’s akin to conversing with a very articulate ‘pretender.’ (However, the latest version, GPT-4, has significantly improved accuracy and serves as an excellent conversational partner.) Therefore, to master ChatGPT, one needs specialized knowledge to critically review and appropriately modify the responses. Moreover, ChatGPT cannot analyze or describe new insights that are not part of the existing information. In other words, the advent of ChatGPT does not mean that humans can neglect their studies or research. However, if individuals with cultural and specialized knowledge critically analyze the responses and use them adeptly, it is possible to significantly improve the efficiency of routine tasks.”

Source: The University of Tokyo | “On Generative AI (ChatGPT, BingAI, Bard, Midjourney, Stable Diffusion, etc.)”[ja]

Legal Risks such as Copyright Infringement and Privacy Violation

Copyright infringement involving ChatGPT is assessed separately for the “AI development and training phase” and the “generation and use phase.” Because the way copyrighted works are used differs at each stage, different provisions of the Copyright Act apply, and the two phases must therefore be considered separately.

Reference: Agency for Cultural Affairs | Reiwa 5 (2023) Copyright Seminar “AI and Copyright”[ja]

In the amended Copyright Law enacted in January 2019, a new provision for the “AI development and learning phase” was established under Article 30-4, which is a limitation of rights provision (an exception where permission is not required). Uses of copyrighted works that do not aim to enjoy the thoughts or emotions expressed in the works, such as information analysis for AI development, can generally be conducted without the copyright holder’s permission.

On the other hand, if ChatGPT’s output is found to be similar to, and to rely on (i.e., be derived from), an existing copyrighted work, it may constitute copyright infringement. Before publishing, it is therefore important to verify who holds the rights to the information ChatGPT referenced and to confirm that the generated content does not resemble any existing work. When quoting a copyrighted work, the source must be clearly indicated (under the limitation-of-rights provision for quotation), and when reproducing a work, the copyright holder’s permission must be obtained and handled appropriately.

According to OpenAI’s terms of use, content created by ChatGPT is available for commercial use. However, if it is difficult to determine whether content created using ChatGPT infringes on copyright, it is recommended to consult with a specialist.

If accused of copyright infringement by the rights holder, one may face civil liability (damages, solatium, injunctions against use, restoration of reputation, and so on) or criminal liability (in principle, prosecuted upon the rights holder’s complaint). For corporations, dual-liability provisions may apply, imposing penalties on both the individual and the legal entity, potentially resulting in significant damage.

Regarding personal information and privacy, it is also necessary to exercise caution and avoid inputting such data into ChatGPT. As mentioned above, even if personal information is accidentally entered into ChatGPT, the system is designed not to store or track personal information and cannot output it in the chat. However, this is OpenAI’s policy, and other platforms or services may differ.

For more information on the risk of personal information leakage, please read the following article.

Related article: The Risk of Corporate Personal Information Leakage and Compensation for Damages[ja]

The Risk of Unintentionally Creating Harmful Content

Depending on the data ChatGPT has been trained on and the prompts provided, there is a risk of generating harmful content. If content created by ChatGPT is published without proper review, it could damage a company’s reputation or brand value and potentially lead to legal issues.

ChatGPT is configured to refuse requests for harmful content, but it may still inadvertently produce harmful program code or material usable for fraudulent services, and such output can be difficult to detect. Bearing this risk in mind, it is crucial to have a system in place to consistently review all generated content.

Understanding Terms of Service to Mitigate Risks When Using ChatGPT

When using ChatGPT for business purposes, it is essential to use it in accordance with OpenAI’s Terms of Service and Privacy Policy to mitigate risks. Since the Terms of Service are frequently updated, it is necessary to regularly check for amendments and ensure that you are familiar with the latest version when using it for business.

Related article: Explaining OpenAI’s Terms of Service: What to Watch Out for in Commercial Use?[ja]

Essential Measures to Avoid Risks in the Business Use of ChatGPT

To mitigate the risks associated with ChatGPT and use it appropriately in business, companies should put the following governance measures in place.

Creating Internal Rules

On May 1, 2023 (Reiwa 5), the Japanese Deep Learning Association (JDLA) compiled the ethical, legal, and social issues (ELSI) related to ChatGPT and released the “Guidelines for the Use of Generative AI.” Discussions on the development of guidelines are also progressing in various sectors, including industry, academia, and government.

When introducing ChatGPT into a company, it is essential not only to enhance individual information security literacy and provide internal education but also to establish your own ChatGPT usage guidelines. By formulating and thoroughly disseminating guidelines that articulate the rules for using ChatGPT within your company, you can expect to mitigate certain risks.

Reference: Japanese Deep Learning Association (JDLA) | “Guidelines for the Use of Generative AI”[ja]

Appointing a Supervisor for ChatGPT Usage

Appointing an internal supervisor to oversee the use of ChatGPT, constantly monitor the operation of the guidelines, and manage risks is also an effective way to avoid risks.

Monitoring ChatGPT’s behavior, correcting generated results, and managing data can be likened to a system audit. A system audit objectively evaluates the efficiency, safety, and reliability of information systems and is conducted with the aim of streamlining operations and supporting organizational transformation. By appointing a supervisor to conduct audits, you can also strengthen transparency and accountability related to operations.

Summary: Risk Mitigation Measures are Essential for Business Use of ChatGPT

In this article, we have explained in detail the risks associated with the business use of ChatGPT and the measures to mitigate them.

For rapidly evolving AI businesses like ChatGPT, it is crucial to implement various legal risk mitigation strategies, ranging from establishing internal usage guidelines to examining the legality of business models, drafting contracts and terms of use, protecting intellectual property rights, and addressing privacy concerns. These strategies necessitate collaboration with experts well-versed in AI technology.

Guidance on Measures by Our Firm

Monolith Law Office is a law firm with extensive experience in both IT, particularly the internet, and legal matters. The AI business is fraught with numerous legal risks, and the support of attorneys well-versed in AI-related legal issues is essential.

Our office provides a variety of legal support for AI businesses, including those involving ChatGPT, through a team of lawyers knowledgeable in AI and professionals such as engineers. Details are provided in the article below.

Areas of practice at Monolith Law Office: AI (including ChatGPT) Legal Services[ja]

Editor in Chief: Toki Kawase, Managing Attorney

An expert in IT-related legal affairs in Japan, he established MONOLITH LAW OFFICE and serves as its managing attorney. A former IT engineer, he has also been involved in the management of IT companies and has served as legal counsel to more than 100 companies, from industry leaders to seed-stage startups.
