OpenAI, the mastermind behind ChatGPT, has recently pulled the plug on its AI Classifier, a tool designed to detect whether a text was penned by a human or an AI. OpenAI’s decision to shut down the AI Classifier was discreet, announced through a small update in a blog post.
AI Classifier, born amidst a surge of AI-generated content, was designed to unmask the true origin of a piece of text. From students using AI to craft their thesis to bloggers populating the internet with AI-written articles, the tool was a much-needed watchdog. But alas, the watchdog has now been silenced.
The reason behind this sudden demise? A disappointing performance scorecard. By OpenAI’s own admission, the AI Classifier correctly identified AI-written text only 26% of the time. Even more concerning, it produced false positives 9% of the time, mislabeling human-written text as AI-generated.
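To get a feel for how weak those numbers are, here is a rough back-of-the-envelope sketch. The prevalence figure below is an assumption for illustration (OpenAI published only the two rates), but under it, a “flagged as AI” verdict is right only about three times out of four, while nearly three quarters of AI-written texts slip through undetected:

```python
# Back-of-the-envelope: how trustworthy was an "AI-written" flag?
# The 26% and 9% figures are from OpenAI's blog post; the 50% prevalence
# of AI-written text in the checked pool is an assumed value, not a fact.
tpr = 0.26        # true-positive rate: AI text correctly flagged
fpr = 0.09        # false-positive rate: human text wrongly flagged
prevalence = 0.5  # assumed share of AI-written texts being checked

true_positives = tpr * prevalence          # AI texts correctly flagged
false_positives = fpr * (1 - prevalence)   # human texts wrongly flagged
precision = true_positives / (true_positives + false_positives)
miss_rate = 1 - tpr                        # AI texts the tool never catches

print(f"chance a flagged text is really AI-written: {precision:.0%}")  # ~74%
print(f"share of AI texts missed entirely:          {miss_rate:.0%}")  # 74%
```

With a lower prevalence (say, 10% AI-written text), the precision drops further, since false positives start to dominate the flags.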
The tool’s limitations didn’t stop there. It often returned hedged verdicts such as “unclear” or “possibly” when it couldn’t definitively determine whether content was generated by ChatGPT or another AI, and it labeled some content as “highly unlikely” to have been written by a human. OpenAI also emphasized that the tool needed more than 1,000 characters of input to be even somewhat reliable, yet longer texts were still sometimes mislabeled.
While the AI Classifier chapter has closed, OpenAI is not giving up. The company says it remains committed to building a more reliable and precise tool to help users understand whether content, be it audio, video, or text, is AI-generated.
But the question lingers: If OpenAI, the creator of ChatGPT, can’t detect its own creation, what does that mean for the future of AI detection? For now, alternatives like GPTZero are available, but the mystery remains.