OpenAI recently had to take ChatGPT offline for emergency maintenance after a bug allowed users to view other users' chat histories. While investigating the incident, the company discovered that the bug was more serious than a chat history leak alone.
According to OpenAI, the bug may have exposed the personal information of 1.2% of ChatGPT Plus subscribers, who pay $20/month for extended access to the AI chatbot. Specifically, the bug unintentionally displayed payment-related information for active subscribers during a nine-hour window on Monday, March 20th. Some users may have seen another user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. OpenAI stressed that full credit card numbers were never exposed.
The company downplayed the severity of the breach, stating that only users who met specific criteria during the nine-hour window were at risk. OpenAI also said it would add redundant checks to library calls and programmatically examine logs to confirm that messages are only served to the correct user, guarding against future recurrences. The company has contacted affected users to alert them to the issue.
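OpenAI has not published the exact mechanism of these checks, but the kind of redundant ownership check it describes is easy to illustrate. The sketch below is a hypothetical example, not OpenAI's code: after a record is fetched from a cache, its owner field is re-verified against the requesting user before it is returned, so a mis-keyed cache entry fails loudly instead of leaking to the wrong person.

```python
# Hypothetical sketch of a "redundant check" on data returned by a cache.
# All names here are illustrative assumptions, not OpenAI's actual API.

from dataclasses import dataclass


@dataclass
class Message:
    message_id: str
    owner_id: str   # the user this message belongs to
    content: str


class OwnershipError(Exception):
    """Raised when cached data does not belong to the requesting user."""


def get_message(cache: dict, requesting_user_id: str, message_id: str) -> Message:
    message = cache[message_id]
    # Redundant check: even if the cache lookup itself is trusted,
    # confirm the record's owner matches the requester before serving it.
    if message.owner_id != requesting_user_id:
        raise OwnershipError(
            f"message {message_id} does not belong to user {requesting_user_id}"
        )
    return message


# Usage: a correctly keyed entry is returned; a mis-keyed one raises
# OwnershipError instead of being shown to the wrong user.
cache = {"m1": Message("m1", "alice", "hello")}
assert get_message(cache, "alice", "m1").content == "hello"
```

The point of such a check is defense in depth: the bug is caught at the last moment before data leaves the server, regardless of where upstream the cache was corrupted.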
This incident raises broader concerns about the security of AI chatbots and the potential for unintended data leaks. ChatGPT gained popularity quickly after its launch in late November, reaching its first million users in less than a week, and its success led Microsoft to invest “billions of dollars” in OpenAI at the end of January. While the exposure was limited to a small percentage of subscribers, it serves as a reminder to companies developing AI technologies to prioritize security and take proactive measures to prevent data breaches.