Understanding the Information Leakage Risks of ChatGPT: Introducing Four Essential Countermeasures

The generative AI known as “ChatGPT” has been garnering significant attention lately. Capable of tasks ranging from drafting documents and programming to creating musical scores and drawings, it has become a focal point across various fields.

The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer.” Through pre-training on vast amounts of text (and, with extensions, images and audio), it can engage in natural, human-like conversation.

ChatGPT is in the spotlight as a tool that can handle complex tasks, offering potential gains in efficiency and cost-effectiveness. Its adoption is already advancing in numerous fields, and many technology companies have launched AI systems of their own.

While AI technologies like ChatGPT present numerous business opportunities, they also come with potential risks such as copyright issues, the spread of misinformation, leaks of sensitive information, privacy concerns, and the potential for misuse in cyber-attacks.

In this article, our attorneys will discuss the risks of information leakage when using ChatGPT and the measures that should be taken to mitigate them.

Risks of Information Leakage Related to ChatGPT

When it comes to the risks associated with the implementation of ChatGPT in businesses, the following four points are primarily identified:

  • Security risks (information leakage, accuracy, vulnerability, availability, etc.)
  • Risks of copyright infringement
  • Risks of potential misuse (such as cyber-attacks)
  • Ethical challenges

The risk of information leakage with ChatGPT refers to the possibility that confidential information input into ChatGPT could be exposed to employees of OpenAI or other users, or used as training data.

According to OpenAI’s data usage policy, unless users route requests through the “API” or apply to “opt out,” any data entered into ChatGPT may be collected and used for training by OpenAI (this is explained in more detail later).

Cases of Information Leakage Linked to the Use of ChatGPT

Here, we introduce cases in which the use of ChatGPT led to leaks of registered personal information and of confidential information that had been entered.

Cases of Personal Information Leakage

On March 24, 2023, OpenAI announced that it had taken ChatGPT offline for a short period on March 20 due to a bug that caused some users to see another user’s personal information, including the last four digits of a credit card number and the card’s expiration date. The leak affected a portion of subscribers to the paid “ChatGPT Plus” plan (approximately 1.2% of members).

It was also revealed that, around the same time as this bug, another bug displayed other users’ chat histories in some users’ chat logs.

In response, on March 31, 2023, the Italian Data Protection Authority issued an order imposing a temporary restriction on the processing of Italian users’ data, citing the lack of a legal basis for ChatGPT to collect and store personal data for training purposes. OpenAI consequently blocked access to ChatGPT from Italy. The block was lifted on April 28, 2023, after OpenAI improved its handling of personal data.

Cases of Internal Confidential Information Leakage

In February 2023, the American cybersecurity company Cyberhaven released a report on ChatGPT usage at its client companies.

According to the report, among the 1.6 million workers at customer companies using Cyberhaven products, 8.2% of knowledge workers had used ChatGPT at work at least once, and 3.1% of those had entered confidential corporate data into ChatGPT.

In another case, from South Korea, the Korean media outlet Economist reported on March 30, 2023, that a division within Samsung Electronics had permitted the use of ChatGPT, which led to confidential information being entered.

Despite Samsung Electronics’ efforts to raise internal awareness of information security, some employees entered program source code and the contents of meetings into the tool.

Under such circumstances, while some countries and companies are imposing restrictions on the use of ChatGPT, others are adopting policies that recommend its use. When considering the introduction of ChatGPT, it is essential to understand the magnitude of the risk of information leakage.

Four Measures to Prevent Information Leakage with ChatGPT

Once information leakage occurs, it can lead to not only legal liabilities but also significant losses in trust and reputation. Therefore, it is crucial to establish and educate on an internal information management system to prevent such leaks.

Here, we would like to introduce four measures to prevent information leakage when using ChatGPT.

Countermeasure 1: Establishing Usage Rules

First, determine your company’s stance on ChatGPT and incorporate provisions regarding its use into your internal regulations. It is crucial to establish and operate under clear rules, such as not inputting personal or confidential information.

In doing so, it is advisable to develop your own ChatGPT usage guidelines. Be sure to also include provisions on the use of ChatGPT in contracts with external parties.

On May 1, 2023, the Japan Deep Learning Association (JDLA) summarized the ethical, legal, and social issues (ELSI) of ChatGPT and published the “Guidelines for the Use of Generative AI.”

Various sectors, including industry, academia, and government, are also starting to consider the development of guidelines. By referencing these and formulating your own clear guidelines for ChatGPT usage, you can expect to mitigate certain risks.

Reference: Japan Deep Learning Association (JDLA) | Guidelines for the Use of Generative AI[ja]

However, if the established guidelines are not communicated and enforced, they are meaningless; guidelines alone are not a sufficient measure.

Countermeasure 2: Establishing a System to Prevent Information Leakage

To guard against human error, implementing a system known as DLP (Data Loss Prevention), which blocks the transmission and copying of confidential data, can prevent specific information from leaking.

DLP constantly monitors data rather than users, automatically identifying and protecting confidential and critical information. When DLP detects confidential information, it can trigger an alert or block the action.
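Commercial DLP products implement this detection with sophisticated classifiers, but the basic screening-and-blocking logic can be illustrated simply. The following is a minimal, hypothetical Python sketch (the patterns and policy are illustrative assumptions, not any vendor’s implementation):

    import re

    # Hypothetical detection patterns for illustration; real DLP products
    # use far richer detectors (document fingerprinting, ML classifiers).
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "confidential_marker": re.compile(r"(?i)\bconfidential\b|社外秘"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of sensitive patterns found in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    def send_to_chatgpt(prompt: str) -> None:
        hits = screen_prompt(prompt)
        if hits:
            # Block the outbound request and raise an alert, as DLP would.
            raise PermissionError(f"Blocked: prompt matched {hits}")
        print("Prompt passed screening; forwarding to ChatGPT...")

    try:
        send_to_chatgpt("Summarize this CONFIDENTIAL roadmap: ...")
    except PermissionError as alert:
        print(alert)

In practice, such screening runs at the network or endpoint layer so that it covers every application employees use, not only ChatGPT.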

While DLP makes it possible to prevent internal information leaks cost-effectively, it requires a sophisticated understanding of security systems, and companies without a technical department may find smooth implementation difficult.

Countermeasure 3: Using Tools to Prevent Data Leakage

As mentioned above, one direct preventative measure is to apply to “opt out,” refusing the collection of data entered into ChatGPT.

You can request the opt-out from ChatGPT’s settings screen. Note, however, that opting out means your prompt history will no longer be saved, which many users may find inconvenient.

Beyond the opt-out setting, another method is to implement tools that utilize ChatGPT’s “API”.

The “API” (Application Programming Interface) is an interface provided by OpenAI for incorporating ChatGPT into your own services or external tools. OpenAI has stated that information input and output through the ChatGPT API will not be used to develop or improve its services.

This is also explicitly stated in the terms of use for ChatGPT. According to the terms:

3. Content

(c) Use of Content to Improve Services

We do not use Content that you provide to or receive from our API (“API Content”) to develop or improve our Services. 

We may use Content from Services other than our API (“Non-API Content”) to help develop and improve our Services.

If you do not want your Non-API Content used to improve Services, you can opt out by filling out this form[ja]. Please note that in some cases this may limit the ability of our Services to better address your specific use case.

Source: OpenAI Official Site | ChatGPT Terms of Use[ja]
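For reference, routing requests through the API can be as simple as the sketch below, which uses OpenAI’s official Python library (a minimal illustration under stated assumptions: the v1.x SDK and the model name are assumptions, so verify the current terms and available models before relying on this):

    # pip install openai  -- assumes OpenAI's v1.x Python SDK
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Content sent this way is "API Content" under the terms quoted above,
    # which OpenAI states it does not use to develop or improve its services.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; choose per your own plan
        messages=[
            {"role": "system", "content": "You are an in-house assistant."},
            {"role": "user", "content": "Draft a short meeting agenda."},
        ],
    )
    print(response.choices[0].message.content)

Note that the API key itself is confidential information; manage it via environment variables or a secrets manager rather than embedding it in source code.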

Countermeasure 4: Implementing In-House IT Literacy Training

In addition to the measures introduced so far, it is crucial to enhance the security literacy of your company’s employees through in-house training.

As the Samsung Electronics case shows, confidential information was entered and leaked despite internal efforts to raise awareness of information security. It is therefore advisable not only to prevent information leaks at the system level but also to conduct in-house training on ChatGPT and IT literacy.

Responding to a Data Breach Incident with ChatGPT

In the unfortunate event of a data breach, it is crucial to promptly investigate the facts and implement countermeasures.

If personal information is leaked, reporting to the Personal Information Protection Commission is mandatory under the Japanese Personal Information Protection Law (個人情報保護法), and the affected individuals must also be notified. Should the leak infringe upon the rights or interests of those individuals, you may be liable for civil damages. Furthermore, if personal information is stolen or provided for illicit purposes, criminal liability may also be pursued.

In cases where trade secrets or technical information are leaked, measures such as deletion requests can be demanded of the recipient under the Japanese Unfair Competition Prevention Law (不正競争防止法). If the leak results in unjust benefits for the other party, you may be liable for civil damages. Moreover, acquiring or using trade secrets or technical information through illicit means can give rise to criminal liability.

If information is leaked in violation of a professional duty of confidentiality, criminal liability may arise under the Penal Code or other laws. In addition, if such a breach causes damage to the other party, civil damages may be required.

Therefore, it is necessary to respond swiftly according to the nature of the leaked information, and it is vital to have a system in place in advance to handle such incidents.

Related article: What is Information Disclosure that Corporations Should Perform When a Data Breach Occurs?[ja]

Related article: What Should Corporations Do in Terms of Administrative Measures When a Personal Information Breach Occurs?[ja]

Summary: Establishing a Framework to Prepare for ChatGPT’s Information Leakage Risks

In this article, we have explained the information leakage risks associated with ChatGPT and the measures that should be taken to address them. In the rapidly evolving AI business landscape, which leverages technologies like ChatGPT, we recommend consulting with experienced attorneys who are well-versed in the legal risks involved. By doing so, you can proactively establish an internal framework tailored to these risks.

Not only for information leakage but also for ensuring the legality of AI-driven business models, drafting contracts and terms of use, protecting intellectual property rights, and addressing privacy concerns, partnering with attorneys who possess both knowledge and experience in AI and law will give you peace of mind.

Guidance on Measures by Our Firm

Monolith Law Office is a law firm with extensive experience in both IT, particularly the internet, and legal matters. The AI business is fraught with numerous legal risks, and the support of attorneys well-versed in AI-related legal issues is essential. Our firm provides sophisticated legal support for AI businesses, including those involving ChatGPT, through a team of AI-knowledgeable attorneys and engineers. Our services include contract drafting, legality reviews of business models, intellectual property protection, and privacy compliance. Please refer to the article below for more details.

Areas of practice at Monolith Law Office: AI (including ChatGPT) Legal Services[ja]

Editor in Chief: Toki Kawase, Managing Attorney

An expert in IT-related legal affairs in Japan, he established MONOLITH LAW OFFICE and serves as its managing attorney. Formerly an IT engineer, he has been involved in the management of IT companies and has served as legal counsel to more than 100 companies, from top-tier organizations to seed-stage startups.
