Skip to content

ChatGPT Data Breaches: Timeline Upto Jan. 2024

chatgpt data breaches with timeline

ChatGPT is a large language model developed by OpenAI, based on the GPT-3.5 architecture. It uses deep learning techniques to understand and generate human-like responses to natural language inputs.

OpenAI announced the release of ChatGPT’s enterprise version in August 2023, featuring the faster GPT-4 model with an extended context window and customization options.

It has been trained on a massive amount of data and has a vast knowledge base, which allows it to provide useful and informative responses to a wide range of queries.

Over the past few years, data breaches have become increasingly prevalent, affecting even the largest corporations and organizations.

The ChatGPT data breach also sparked a wider conversation about the ethics of artificial intelligence and the responsibility of companies that develop these technologies.

As more companies adopt AI technologies to power their products and services, it is crucial for them to consider the potential risks and take measures to mitigate them.

Timeline with Most Recent ChatGPT-related News, Privacy Concerns & Data Breaches

December 2023: Researchers from Google DeepMind, Cornell University, and four other universities found that upon prompted ChatGPT to repeat certain words over and over again.

This can cause ChatGPT to leak training data, including personally identifiable information.
Researchers were able to extract over 10,000 unique verbatim memorized training examples from ChatGPT using only $200 worth of queries.

This is a privacy concern because it means that anyone with access to ChatGPT could potentially extract large amounts of personal data.

The attack described in the paper is specific to ChatGPT and does not work against other LLMs.

September 2023: The Personal Data Protection Office of Poland is investigating the breach of European data protection and privacy rules by Open AI’s ChatGPT.

An unnamed complainant has accused the firm of processing data in an unlawful and unreliable manner which is not transparent according to GDPR rules.

September 2023: Microsoft’s AI research team has accidentally exposed the 38 TB of open-source training data including a disk backup of two employees’ workstations while uploading a bucket on GitHub.

The open-source data backup includes secrets, private keys, passwords, and over 30,000 internal Microsoft teams messages. 

June 2022 – May 2023: According to Group-IB’s findings, Over 101,134 ChatGPT accounts were compromised by infamous Raccoon info stealers.

Stolen accounts were highest in the Asia-Pacific region as compared to other regions and India was identified as the most affected nation. The credentials were up for sale on illegal dark web marketplaces.

However, an OpenAI representative told Tom’s Hardware via email that “The findings from Group-IB’s Threat Intelligence report is the result of commodity malware on people’s devices and not an OpenAI breach,”

May 18, 2023: iPhone maker Apple inc has restricted the use of the AI Chatbot ChatGPT for its employees. The company is wary of confidential data leaks to chat apps. It also banned the use of GitHub’s Copilot which is also owned by Microsoft.

Apple is in the process of developing a similar chatbot for its users and may release it in the near future.

May 18, 2023: New York City revokes the ban on ChatGPT use in public schools, said Chancellor David Banks. “Our nation is potentially on the brink of a significant societal shift driven by generative artificial intelligence,” he wrote in an op-ed, published by Chalkbeat.

May 18, 2023: OpenAI has launched an official free ChatGPT app for iOS users. The app is only available in the US App Store for now and will be launched in a few other countries in the coming weeks.

May 16, 2023: OpenAI CEO Sam Altman testifies at a Senate hearing that AI should be regulated by a US or global agency.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” says Sam Altman before Sen. Richard Blumenthal, the Connecticut Democrat who chaired the Senate Judiciary Committee’s subcommittee on privacy, technology and the law on Capitol Hill in Washington.

May 3, 2023: The Pentagon’s AI Chief Craig Martell said, I’m scared to death,” about how people might use ChatGPT and other consumer-facing AI agents.

My fear is that we trust it too much without the providers of that service building into it the right safeguards and the ability for us to validate” the information, Martell said

May 3, 2023: Microsoft Corp. Chief Economist Michael Schwarz said “I am confident AI will be used by bad actors, and yes it will cause real damage,” at a World Economic Forum panel in Geneva. “It can do a lot of damage in the hands of spammers with elections and so on.”

Artificial Intelligence (AI) “clearly” must be regulated, he said, but lawmakers should be cautious and wait until the technology causes “real harm.”

May 3, 2023: Hackers using ChatGPT to spread malware on social media platforms like Facebook, Instagram and WhatsApp

Researchers at Meta, the parent company of Facebook, issued a warning that fraudulent groups such as Ducktail and NodeStealer are impersonating ChatGPT and similar tools to exploit people through malicious browser extensions, advertisements, and even various social media platforms.

Their objective is to execute unsanctioned advertisements from compromised business accounts throughout the internet.

Meta also finds that more than 1,000 domains share ChatGPT-themed malware which compromises users’ data.

May 2, 2023: Are you scared of leaking your personal data to ChatGPT? OpenAI is testing a private alternative.

Two individuals familiar with the forthcoming announcement have stated that Microsoft’s Azure cloud server division intends to offer a variant of ChatGPT that operates on dedicated cloud servers, which will store data separately from that of other customers so that no secrets are revealed to the main ChatGPT system.

May 2, 2023: Samsung bans the use of A.I. chatbot ChatGPT after discovering that employees were misusing the use of chatbot. The company had advised workers not to enter any personal or company-related information into the ChatGPT.

According to a recent company-wide survey conducted by Samsung, 65% of respondents expressed apprehension regarding security risks associated with the use of generative AI services.

April 28, 2023: The privacy watchdog of Italy reverses the ban on ChatGPT after OpenAI fulfilled the conditions set by Italy’s Garante Privacy. Now the services come with enhanced transparency and rights for European users and non-users.

April 26, 2023: OpenAI offers an option to users to turn off their ChatGPT conversations and chat history from being used for training the ChatGPT’s artificial intelligence models by clicking a toggle switch in their account settings. 

The move could be a privacy safeguard for people who sometimes share sensitive information with the popular AI chatbot.

April 26, 2023: During the RSA Conference, Greg Day, the VP and Global Field CISO of Cybereason, and Paul Vann, a student at the University of Virginia, revealed that numerous security issues still exist in ChatGPT’s GPT-4.

Additionally, they noted that its social engineering capabilities have advanced and are now capable of generating more authentic phishing emails and conversations.

According to them, the problems persist within ChatGPT, and both can be deceived into generating ransomware, disguising malware, and other forms of exploitation.

April 14, 2023: An investigation has been launched by Spain into the possibility of a data breach involving OpenAI’s ChatGPT.

Spain declared that it will initiate an inquiry into ChatGPT, a chatbot powered by artificial intelligence (AI). This announcement came on the same day that the European Union established a task force to encourage collaboration among European countries on this topic.

April 14, 2023: Researcher Simon Willison on his blog stated that AI chatbot ChatGPT is Vulnerable to Prompt Injection via YouTube Transcripts. The only injected prompts it was able to get working were telling a joke and Rickrolling.

April 11, 2023: After incidents of data breaches, Open AI has partnered with the crowdsourced security platform, Bugcrowd for the bug bounty program.

The Bug Bounty Program by OpenAI is designed to acknowledge and reward security researchers who help maintain the security of the company.

It will encourage security experts to report any security flaws, vulnerabilities or bugs they come across in Open AI systems. By doing so, they will play a significant part in ensuring that the company’s AI technology is secure for all users.

The rewards for identifying security vulnerabilities range from $200 for low-severity findings to as much as $20,000 for exceptional discoveries.

April 10, 2023: The Payment and Clearing Industry Association in China cautioned against the usage of OpenAI’s ChatGPT and other comparable artificial intelligence tools supported by Microsoft, citing potential dangers such as the leakage of cross-border data.

It also said that employees within the payment industry are required to adhere to laws and regulations while utilizing tools like ChatGPT, and they are prohibited from sharing any confidential information related to the finance industry or the country.

April 6, 2023: Samsung Employees Leaked Confidential Data to ChatGPT.

According to reports from the Korea Times, a Samsung worker allegedly copied the source code from a semiconductor database onto ChatGPT and requested assistance in locating a solution.

Additionally, another employee supposedly disclosed proprietary code in an attempt to rectify defective equipment, and a third employee purportedly submitted a complete meeting to the chatbot, asking it to generate minutes.

March 31, 2023: ChatGPT Is Banned in Italy Over Privacy Concerns.

The Italian authority in charge of data protection has accused OpenAI of illegally gathering personal information from users using ChatGPT and failing to implement an age verification system to protect minors from accessing inappropriate content.

Subsequently, the AI chatbot was blocked by a government order. Italy has become the first country to ban ChatGPT due to privacy concerns.

March 28, 2023: The newly launched PrivateGPT platform of Private AI has been designed to work together with OpenAI’s chatbot. It has the ability to automatically remove over 50 types of personally identifiable information (PII) in real time as users enter prompts for ChatGPT.

If you want to use ChatGPT without privacy concerns, then use PrivateGPT.

December 2022: Cybercriminals and threat actors start to use ChatGPT to get new malware codes.

American-Israeli multinational security firm, Check Point Research (CPR), detailed how successfully it was able to execute a complete infection process, starting from crafting a persuasive spear-phishing email to running a reverse shell that can receive English commands in the ChatGPT.

March 2023 – ChatGPT Data Breach

According to Security Week, OpenAI reported that a vulnerability in the Redis open-source library used by ChatGPT resulted in certain users having the ability to view the chat history titles of another active user, along with the possibility of accessing the initial message of a new conversation, provided that both users were active simultaneously.

The company acknowledged that between 1-10 a.m. PST on March 20, the bug may have resulted in the inadvertent exposure of payment-related details for premium ChatGPT users.

On March 20, Open AI was forced to take the ChatGPT bot offline for emergency maintenance to patch the bug.

Technical details of the data breach were explained by Open AI and the same is given below:

After conducting a thorough examination, they found that the above-mentioned privacy breach may have resulted in the inadvertent disclosure of payment-related details for 1.2% of the ChatGPT Plus users who were active within a particular nine-hour timeframe.

Prior to the temporary suspension of ChatGPT on Monday, certain individuals had the ability to view the first and last name, email address, payment address, credit card type, and the last four digits of a credit card number, as well as the credit card expiration date, of another user who was also online.

It’s important to note that full credit card numbers were never revealed.

When users log into the ChatGPT, in the “My Account” section under “Manage my subscription” between 1 a.m. to 10 a.m. Pacific time on March 20, a bug might have caused some subscription confirmation emails to be sent to the wrong recipients.

These emails included information regarding the credit card type and last four digits of another user’s credit card number but did not disclose the full credit card numbers.

Although there is no confirmation, it is possible that a few subscription confirmation emails were also incorrectly addressed prior to March 20.

Open AI has contacted users who may have been impacted to inform them about the potential exposure of their payment information to assure them that users’ data is not at risk currently.

Measures Taken By Open AI to Protecting Users’ Privacy and Keeping Their Data Safe

  • Tested the patch fix for the underlying bug extensively.
  • Incorporated additional validation measures to guarantee that the information retrieved from our Redis cache corresponds to the user who requested it.
  • Included additional verifications to ensure that the information retrieved from their Redis cache corresponds with the user who made the request.
  • Used programming techniques to analyze their logs and ensure that only the appropriate user has access to each message.
  • Cross-referenced multiple data sources to accurately pinpoint the impacted users, in order to inform them.
  • Enhanced logging will be implemented to detect the occurrence of this issue and ensure complete verification of its resolution.
  • They enhanced the resilience and scalability of their Redis cluster to decrease the probability of connection errors during high-traffic periods.

Method To Delete ChatGPT Account

Procedure to delete ChatGPT account if you feel your privacy is being beached then follow the steps given below:

  • Firstly, log in to your ChatGPT account with your credentials
  • Now, click on the “three dots” adjacent to your account name on the bottom left tab.
  • After that, click on the Settings tab.
  • A small window will appear on the computer screen, with two options, Theme and Data Controls.
  • Click on the “Show” option adjacent to the data controls.
  • Next important steps you should follow before you delete your account.
  • “Turn Off” the Chat History & Training and export your data which was previously stored in your ChatGPT account.
  • Subsequently, click on the “Delete Account” option located at the bottom of the window.
  • Finally, click on the “Permanently delete my account” button, you have now successfully deleted your ChatGPT account.

**Note: Account deletion is permanent and you can’t reverse it. Moreover, you won’t be able to reuse the same email address and phone number to create a new account.

In conclusion, the ChatGPT data breach was a wake-up call for both OpenAI and the wider tech industry.

While the incident did cause some disruption, OpenAI’s response demonstrated a commitment to user privacy and a willingness to take swift action to protect sensitive data.

It is now up to other companies to follow suit and ensure that they have the necessary measures in place to safeguard their users’ data.

Kevin James

Kevin James

I'm Kevin James, and I'm passionate about writing on Security and cybersecurity topics. Here, I'd like to share a bit more about myself. I hold a Bachelor of Science in Cybersecurity from Utica College, New York, which has been the foundation of my career in cybersecurity. As a writer, I have the privilege of sharing my insights and knowledge on a wide range of cybersecurity topics. You'll find my articles here at Cybersecurityforme.com, covering the latest trends, threats, and solutions in the field.