What Are the Information Leakage Risks of ChatGPT? Introducing Four Essential Countermeasures

The generative AI tool known as “ChatGPT” has recently attracted significant attention. It can draft text, generate program code, and even create musical scores and illustrations, and it is rapidly being adopted across a wide range of fields.
The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer,” a model that engages in natural, human‑like conversation by pre‑training on large volumes of text, image, and audio data. Because ChatGPT can handle complex tasks, it is widely viewed as a tool that improves task efficiency and offers strong cost-effectiveness, with many technology companies developing their own AI systems.
While AI technologies like ChatGPT present numerous business opportunities, they also come with potential risks such as copyright issues, the spread of misinformation, confidential information leaks, privacy concerns, and the potential for misuse in cyber-attacks.
In this article, our attorneys will discuss the risks of information leakage associated with ChatGPT and the measures that should be taken to mitigate them.
Risks of Information Leakage Related to ChatGPT

When it comes to the risks associated with the implementation of ChatGPT in businesses, the following four points are primarily identified:
- Security risks (information leakage, accuracy, vulnerabilities, and related concerns)
- Risks of copyright infringement
- Risks of potential misuse (such as cyber-attacks)
- Ethical challenges
The risk of information leakage with ChatGPT refers to the possibility that confidential information input into ChatGPT could be exposed to OpenAI personnel or other users, or used as training data.
According to OpenAI’s policies, user data entered into ChatGPT may be collected and used for training unless the user accesses the service through the API or applies for an opt‑out, as discussed below.
Cases of Information Leakage Linked to the Use of ChatGPT
This section introduces examples in which personal data entered into ChatGPT was exposed, and cases in which internal confidential information was leaked through ChatGPT use.
Cases Involving Personal Data Leakage
On March 24, 2023, OpenAI announced that it had taken ChatGPT offline on March 20 due to a bug that caused some users to see other users’ personal information, including the last four digits of credit card numbers and card expiration dates. The incident affected a portion of “ChatGPT Plus” paid‑plan subscribers, estimated at about 1.2 percent of members.
OpenAI also disclosed that, at the same time, another bug caused some users’ chat histories to display other users’ conversations in the chat log. In response, on March 31, 2023, the Italian data protection authority issued an improvement order, imposing a temporary restriction on data processing for users in Italy on the ground that there was no legal basis for ChatGPT’s collection and storage of personal data for training purposes. OpenAI then blocked access to ChatGPT from Italy until April 28, 2023, when the block was lifted after OpenAI improved its handling of personal data.
Cases Involving Internal Confidential Information Leakage
In February 2023, the US cybersecurity company Cyberhaven released a report on ChatGPT usage among its client companies.
According to the report, among the 1.6 million workers at customer companies using Cyberhaven products, 8.2% of knowledge workers had used ChatGPT at work at least once, and 3.1% had entered corporate confidential data into ChatGPT.
In a separate case in South Korea, the media outlet Economist reported on March 30, 2023, that a division within Samsung Electronics had permitted ChatGPT use, which led to employees entering confidential information. This included employees inputting program source code and meeting details, despite internal efforts at Samsung Electronics to raise awareness of information security.
Under such circumstances, some countries and companies have moved to restrict ChatGPT, while others have adopted policies that encourage its use. When considering whether to introduce ChatGPT, companies should carefully assess the magnitude of the information‑leakage risk.
Four Measures to Prevent Information Leakage When Using ChatGPT

Once information leakage occurs, it can lead to not only legal liabilities but also significant losses in trust and reputation. Therefore, it is crucial to build and operate an internal information management framework, including appropriate employee education, to prevent leaks.
This section outlines four key measures that can help reduce information‑leakage risks when using ChatGPT.
Countermeasure 1: Establishing Usage Rules
First, the company should determine its position on ChatGPT and reflect that position in its internal regulations. It is important to establish and operate under clear rules, such as a prohibition on entering personal data or confidential information, and to implement practical controls that ensure compliance with those rules.
In doing so, it is advisable to draft internal ChatGPT usage guidelines that are tailored to the company’s operations. Contracts with external counterparties should also address ChatGPT use, such as whether and how ChatGPT may be used in connection with the services or deliverables.
On May 1, 2023, the Japan Deep Learning Association (JDLA) summarized the ethical, legal, and social issues (ELSI) of ChatGPT and published the “Guidelines for the Use of Generative AI.”
Various sectors, including industry, academia, and government stakeholders, have also begun considering the development of their own guidelines, and companies can refer to these when preparing internal rules. By formulating clear, written internal guidelines on ChatGPT use, a company can reasonably expect to avoid at least some risks, provided those guidelines are effectively implemented.
Reference: Japan Deep Learning Association (JDLA) | Guidelines for the Use of Generative AI[ja]
However, guidelines alone are not sufficient if they are not properly communicated and enforced. A guideline that is drafted but not disseminated or monitored will have little practical effect as a risk‑mitigation measure.
Countermeasure 2: Build Systems to Prevent Information Leakage
To guard against human error, companies can implement a system known as DLP (Data Loss Prevention), which is designed to prevent leaks of specific data by blocking the transmission or copying of confidential information.
DLP systems continuously monitor data rather than users, automatically identifying and protecting confidential or important information. When DLP detects confidential data, it can issue an alert or block the relevant user operation.
DLP can prevent internal information leaks cost-effectively, but deploying it requires a sophisticated understanding of security systems, and companies without a dedicated technical department may find smooth adoption difficult.
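As a simplified illustration of the pattern-based detection that DLP-style tools perform, the hypothetical Python sketch below screens a prompt for obvious confidential markers before it would be sent to ChatGPT. The patterns and the `screen_prompt` function are illustrative assumptions only, not part of any actual DLP product:

```python
import re

# Hypothetical patterns a DLP-style filter might flag.
# Real DLP products use far more sophisticated detection
# (data fingerprinting, classification, exact-data matching).
CONFIDENTIAL_PATTERNS = [
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "possible credit card number"),
    (re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"), "confidentiality marker"),
    (re.compile(r"(?i)BEGIN (RSA |OPENSSH )?PRIVATE KEY"), "private key material"),
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons a prompt should be blocked; empty list if clean."""
    return [reason for pattern, reason in CONFIDENTIAL_PATTERNS if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Meeting notes (INTERNAL ONLY): card 4242 4242 4242 4242")
    if findings:
        # Alert and block instead of forwarding the prompt to ChatGPT.
        print("Blocked:", ", ".join(findings))
    else:
        print("Prompt passed screening")
```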
Countermeasure 3: Use Tools That Prevent Data Leakage
As mentioned above, a direct preventative measure is to apply for an “opt-out” to refuse the collection of data entered into ChatGPT. Users can request this opt‑out from the ChatGPT settings screen, although doing so prevents prompt history from being saved, which many users may find inconvenient.
Beyond the opt-out setting, another method is to implement tools that utilize ChatGPT’s “API”. The “API” (Application Programming Interface) is an interface provided by OpenAI that allows ChatGPT to be integrated into a company’s own services or external tools. OpenAI has stated that it does not use information input or output through the ChatGPT API.
This is also explicitly stated in ChatGPT’s Terms of Use:
3. Content
(c) Use of Content to Improve Services
We do not use Content that you provide to or receive from our API (“API Content”) to develop or improve our Services.
We may use Content from Services other than our API (“Non-API Content”) to help develop and improve our Services.
If you do not want your Non-API Content used to improve Services, you can opt out by filling out this form[ja]. Please note that in some cases this may limit the ability of our Services to better address your specific use case.
Source: OpenAI Official Site | ChatGPT Terms of Use[ja]
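For reference, a minimal sketch of such an API-based integration is shown below, assuming OpenAI’s official `openai` Python package (v1-style client) and an `OPENAI_API_KEY` environment variable; the model name and prompt are placeholders:

```python
# A minimal sketch of calling ChatGPT through the API rather than the
# web interface; per the terms quoted above, API Content is not used
# to develop or improve OpenAI's services.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize our meeting agenda."},
    ],
)
print(response.choices[0].message.content)
```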
Countermeasure 4: Conduct In‑House IT Literacy Training
In addition to the measures introduced so far, it is crucial to improve employees’ security literacy through internal training. In the Samsung Electronics example, confidential information was entered into ChatGPT despite internal efforts to raise awareness of information security, and this behavior led directly to the information leak.
Therefore, companies should not rely solely on system‑based controls to prevent information leakage. It is also desirable to conduct regular in‑house training on ChatGPT and related IT literacy topics so that employees understand the risks and proper handling of data when using such tools.
Responding When an Information Leak Occurs via ChatGPT

In the unfortunate event of a data breach, it is crucial to promptly investigate the facts and implement countermeasures.
If personal data is leaked, companies are required, under Japan’s Personal Information Protection Act, to report the incident to the Personal Information Protection Commission and to notify the affected individuals. If the leak of personal data infringes the rights or interests of those individuals, the company may be liable for civil damages, and if personal data is stolen or provided for an improper purpose, criminal liability may also arise.
In cases of trade secret or technical information leaks, the company may, under Japan’s Unfair Competition Prevention Act, request measures such as deletion from the party that received the leaked information. If the leak of trade secrets or technical information results in unjust benefit to a counterparty, the company may face civil liability for damages, and acquiring or using such information through improper means can also give rise to criminal liability.
If information is leaked in violation of professional confidentiality obligations, criminal liability may arise under Japan’s Penal Code or other statutes. In addition, if a breach of professional confidentiality obligations causes damage to another party, the company or responsible individual may be liable for civil damages.
For these reasons, companies must respond quickly in line with the nature of the leaked information and should build internal structures and procedures in advance so that they can act without delay when an incident occurs.
Related article: What Companies Should Disclose in the Event of a Data Breach?
Related article: What to Do in the Event of a Personal Data Breach? Explanation of Administrative Measures Companies Should Take
Summary: Establishing a Framework to Prepare for ChatGPT’s Information Leakage Risks
This article has outlined the information‑leakage risks associated with ChatGPT and the countermeasures companies should consider. In AI‑driven businesses that make use of rapidly evolving tools such as ChatGPT, it is advisable to consult experienced attorneys who are well versed in the legal risks in order to establish internal structures that are proportionate to those risks in advance.
In addition to information‑leakage issues, companies should seek legal support for assessing the legality of AI‑based business models, drafting contracts and terms of use, protecting intellectual property rights, and addressing privacy and data‑protection requirements. Working with attorneys who have both legal and AI expertise allows companies to pursue AI business opportunities with greater confidence.
Guidance on Measures by Our Firm
Monolith Law Office is a law firm with extensive experience in both IT, particularly the internet, and legal matters. The AI business is fraught with numerous legal risks, and the support of attorneys well-versed in AI-related legal issues is essential. Our firm provides sophisticated legal support for AI businesses, including those involving ChatGPT, through a team of AI-knowledgeable attorneys and engineers. Our services include contract drafting, legality reviews of business models, intellectual property protection, and privacy compliance. Please refer to the article below for more details.
Areas of practice at Monolith Law Office: AI (including ChatGPT) Legal Services